Lasso Screening Rules via Dual Polytope Projection
Jie Wang, Jiayu Zhou, Peter Wonka, Jieping Ye
Computer Science and Engineering
Arizona State University, Tempe, AZ 85287
{jie.wang.ustc, jiayu.zhou, peter.wonka, jieping.ye}@asu.edu
Abstract
Lasso is a widely used regression technique to find sparse representations. When
the dimension of the feature space and the number of samples are extremely large,
solving the Lasso problem remains challenging. To improve the efficiency of solving large-scale Lasso problems, El Ghaoui and his colleagues have proposed the
SAFE rules which are able to quickly identify the inactive predictors, i.e., predictors that have 0 components in the solution vector. Then, the inactive predictors
or features can be removed from the optimization problem to reduce its scale. By
transforming the standard Lasso to its dual form, it can be shown that the inactive
predictors include the set of inactive constraints on the optimal dual solution. In
this paper, we propose an efficient and effective screening rule via Dual Polytope
Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual
space is a convex and closed polytope. Moreover, we show that our screening rule
can be extended to identify inactive groups in group Lasso. To the best of our
knowledge, there is currently no "exact" screening rule for group Lasso. We have
evaluated our screening rule using many real data sets. Results show that our rule
is more effective in identifying inactive predictors than existing state-of-the-art
screening rules for Lasso.
1 Introduction
Data with various structures and scales comes from almost every aspect of daily life. To effectively
extract patterns in the data and build interpretable models with high prediction accuracy is always
desirable. One popular technique to identify important explanatory features is sparse regularization. For instance, consider the widely used $\ell_1$-regularized least squares regression problem known as Lasso [20]. The most appealing property of Lasso is the sparsity of the solutions, which is equivalent to feature selection. Suppose we have $N$ observations and $p$ predictors. Let $y$ denote the $N$-dimensional response vector and $X = [x_1, x_2, \ldots, x_p]$ the $N \times p$ feature matrix. Let $\lambda \geq 0$ be the regularization parameter; the Lasso problem is formulated as the following optimization problem:
$$\inf_{\beta \in \mathbb{R}^p} \ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1. \qquad (1)$$
Lasso has achieved great success in a wide range of applications [5, 4, 28, 3, 23] and in recent years
many algorithms have been developed to efficiently solve the Lasso problem [7, 12, 18, 6, 10, 1, 11].
However, when the dimension of the feature space and the number of samples are very large, solving
the Lasso problem remains challenging because we may not even be able to load the data matrix into
main memory. The idea of a screening test proposed by El Ghaoui et al. [8] is to first identify inactive
predictors that have 0 components in the solution and then remove them from the optimization.
Therefore, we can work on a reduced feature matrix to solve Lasso efficiently.
In [8], the "SAFE" rule discards $x_i$ when
$$|x_i^T y| < \lambda - \|x_i\|_2 \|y\|_2 \,\frac{\lambda_{\max} - \lambda}{\lambda_{\max}}, \qquad (2)$$
where $\lambda_{\max} = \max_i |x_i^T y|$ is the largest parameter such that the solution is nontrivial. Tibshirani et al. [21] proposed a set of strong rules which were more effective in identifying inactive predictors. The basic version discards $x_i$ if $|x_i^T y| < 2\lambda - \lambda_{\max}$. However, it should be noted that the proposed
strong rules might mistakenly discard active predictors, i.e., predictors which have nonzero coefficients in the solution vector. Xiang et al. [26, 25] developed a set of screening tests based on the
estimation of the optimal dual solution and they have shown that the SAFE rules are in fact a special
case of the general sphere test.
In this paper, we develop new efficient and effective screening rules for the Lasso problem; our
screening rules are exact in the sense that no active predictors will be discarded. By transforming
problem (1) to its dual form, our motivation is mainly based on three geometric observations in the
dual space. First, the active predictors belong to a subset of the active constraints on the optimal dual
solution, which is a direct consequence of the KKT conditions. Second, the optimal dual solution is
in fact the projection of the scaled response vector onto the feasible set of the dual variables. Third,
because the feasible set of the dual variables is closed and convex, the projection is nonexpansive
with respect to $\lambda$ [2], which results in an effective estimation of its variation. Moreover, based on the basic DPP rules, we propose the "Enhanced DPP" rules, which are able to detect more inactive
features than DPP. We evaluate our screening rules on real data sets from many different applications.
The experimental results demonstrate that our rules are more effective in discarding inactive features
than existing state-of-the-art screening rules.
2 Screening Rules for Lasso via Dual Polytope Projections
In this section, we present the basics of the dual formulation of problem (1) including its geometric
properties (Section 2.1). Based on the geometric properties of the dual optimum, we develop the fundamental principle in Section 2.2 (Theorem 2), which can be used to construct screening rules for Lasso. In Section 2.3, we discuss the relation between the dual optimum and LARS [7]. As a straightforward extension of the DPP rules, we develop the sequential version of DPP (SDPP) in Section 2.4.
Moreover, we present enhanced DPP rules in Section 2.5.
2.1 Basics
Different from [26, 25], we do not assume $y$ and all $x_i$ have unit length. We first transform problem (1) to its dual form (to make the paper self-contained, we provide the detailed derivation of the dual form in the supplemental materials):
$$\sup_{\theta} \left\{ \tfrac{1}{2}\|y\|_2^2 - \tfrac{\lambda^2}{2}\Big\|\theta - \tfrac{y}{\lambda}\Big\|_2^2 \;:\; |x_i^T \theta| \leq 1,\; i = 1, 2, \ldots, p \right\} \qquad (3)$$
where $\theta$ is the dual variable. Since the feasible set, denoted by $F$, is the intersection of $2p$ half-spaces, it is a closed and convex polytope. From the objective function of the dual problem (3), it is easy to see that the optimal dual solution $\theta^*$ is the feasible $\theta$ which is closest to $\frac{y}{\lambda}$. In other words, $\theta^*$ is the projection of $\frac{y}{\lambda}$ onto the polytope $F$. Mathematically, for an arbitrary vector $w$ and a convex set $C$, if we define the projection function as
$$P_C(w) = \operatorname*{argmin}_{u \in C} \|u - w\|_2, \qquad (4)$$
then
$$\theta^* = P_F(y/\lambda) = \operatorname*{argmin}_{\theta \in F} \Big\|\theta - \tfrac{y}{\lambda}\Big\|_2. \qquad (5)$$
We know that the optimal primal and dual solutions satisfy
$$y = X\beta^* + \lambda\theta^*, \qquad (6)$$
and the KKT conditions for the Lasso problem (1) are
$$(\theta^*)^T x_i \in \begin{cases} \{\operatorname{sign}([\beta^*]_i)\} & \text{if } [\beta^*]_i \neq 0, \\ [-1, 1] & \text{if } [\beta^*]_i = 0, \end{cases} \qquad (7)$$
where $[\cdot]_k$ denotes the $k$-th component.
By the KKT conditions in Eq. (7), if the inner product $(\theta^*)^T x_i$ belongs to the open interval $(-1, 1)$, then the corresponding component $[\beta^*]_i$ in the solution vector $\beta^*(\lambda)$ has to be 0. As a result, $x_i$ is an inactive predictor and can be removed from the optimization.
On the other hand, let $H(x_i) = \{z : z^T x_i = 1\}$ and $H^-(x_i) = \{z : z^T x_i \leq 1\}$ be the hyperplane and half space determined by $x_i$, respectively. Consider the dual problem (3); the constraints induced by each $x_i$ are equivalent to requiring each feasible $\theta$ to lie inside the intersection of $H^-(x_i)$ and $H^-(-x_i)$. If $|(\theta^*)^T x_i| = 1$, i.e., either $\theta^* \in H(x_i)$ or $\theta^* \in H(-x_i)$, we say the constraints induced by $x_i$ are active on $\theta^*$.
We define the "active" set on $\theta^*$ as $\mathcal{I}_{\theta^*} = \{i : |(\theta^*)^T x_i| = 1,\ i \in \mathcal{I}\}$, where $\mathcal{I} = \{1, 2, \ldots, p\}$. Otherwise, if $\theta^*$ lies strictly between $H(x_i)$ and $H(-x_i)$, i.e., $|(\theta^*)^T x_i| < 1$, we can safely remove $x_i$ from the problem because $[\beta^*]_i = 0$ according to the KKT conditions in Eq. (7). Similarly, the "inactive" set on $\theta^*$ is defined as $\bar{\mathcal{I}}_{\theta^*} = \mathcal{I} \setminus \mathcal{I}_{\theta^*}$. Therefore, from a geometric perspective, if we know $\theta^*$, i.e., the projection of $\frac{y}{\lambda}$ onto $F$, the predictors in the inactive set on $\theta^*$ can be discarded from the optimization. It is worthwhile to mention that inactive predictors, i.e., predictors that have 0 components in the solution, are not the same as predictors in the inactive set. In fact, by the KKT conditions, predictors in the inactive set must be inactive predictors since they are guaranteed to have 0 components in the solution, but the converse may not be true.
2.2 Fundamental Screening Rules via Dual Polytope Projections
Motivated by the above geometric intuitions, we next show how to find the predictors in the inactive set on $\theta^*$. To emphasize the dependence on $\lambda$, let us write $\theta^*(\lambda)$ and $\beta^*(\lambda)$. If we know exactly where $\theta^*(\lambda)$ is, it is trivial to find the predictors in the inactive set. Unfortunately, in most cases we only have incomplete information about $\theta^*(\lambda)$ without actually solving problem (1) or (3). Suppose we know the exact $\theta^*(\lambda')$ for a specific $\lambda'$. How can we estimate $\theta^*(\lambda'')$ for another $\lambda''$ and its inactive set? To answer this question, we start from Eq. (5): $\theta^*(\lambda)$ is nonexpansive because it is a projection operator. For convenience, we cite the projection theorem in [2] as follows.
Theorem 1. Let $C$ be a closed convex set; then the projection function defined in Eq. (4) is continuous and nonexpansive, i.e.,
$$\|P_C(w_2) - P_C(w_1)\|_2 \leq \|w_2 - w_1\|_2, \quad \forall\, w_2, w_1. \qquad (8)$$
Given $\theta^*(\lambda')$, the next theorem shows how to estimate $\theta^*(\lambda'')$ and its inactive set for another parameter $\lambda''$.
Theorem 2. For the Lasso problem, assume we are given the solution of its dual problem $\theta^*(\lambda')$ for a specific $\lambda'$. Let $\lambda''$ be a nonnegative value different from $\lambda'$. Then $[\beta^*(\lambda'')]_i = 0$ if
$$|x_i^T \theta^*(\lambda')| < 1 - \|x_i\|_2\|y\|_2\left|\tfrac{1}{\lambda'} - \tfrac{1}{\lambda''}\right|. \qquad (9)$$
Proof. From the KKT conditions in Eq. (7), we know $|x_i^T\theta^*(\lambda'')| < 1 \Rightarrow [\beta^*(\lambda'')]_i = 0$. By the dual problem (3), $\theta^*(\lambda)$ is the projection of $\frac{y}{\lambda}$ onto the feasible set $F$. According to the projection theorem [2], that is, Theorem 1, for closed convex sets, $\theta^*(\lambda)$ is continuous and nonexpansive, i.e.,
$$\|\theta^*(\lambda'') - \theta^*(\lambda')\|_2 \leq \Big\|\tfrac{y}{\lambda''} - \tfrac{y}{\lambda'}\Big\|_2 = \|y\|_2\left|\tfrac{1}{\lambda''} - \tfrac{1}{\lambda'}\right|. \qquad (10)$$
Then
$$\begin{aligned} |x_i^T\theta^*(\lambda'')| &\leq |x_i^T\theta^*(\lambda'') - x_i^T\theta^*(\lambda')| + |x_i^T\theta^*(\lambda')| \\ &< \|x_i\|_2\,\|\theta^*(\lambda'') - \theta^*(\lambda')\|_2 + 1 - \|x_i\|_2\|y\|_2\left|\tfrac{1}{\lambda''} - \tfrac{1}{\lambda'}\right| \\ &\leq \|x_i\|_2\|y\|_2\left|\tfrac{1}{\lambda''} - \tfrac{1}{\lambda'}\right| + 1 - \|x_i\|_2\|y\|_2\left|\tfrac{1}{\lambda''} - \tfrac{1}{\lambda'}\right| = 1, \end{aligned} \qquad (11)$$
which completes the proof.
From Theorem 2, it is easy to see that our rule is quite flexible, since every $\theta^*(\lambda')$ results in a new screening rule. The smaller the gap between $\lambda'$ and $\lambda''$, the more effective the screening rule is. By "more effective", we mean a stronger capability of identifying inactive predictors.
As an example, let us find $\theta^*(\lambda_{\max})$. Recall that $\lambda_{\max} = \max_i |x_i^T y|$. It is easy to verify that $\frac{y}{\lambda_{\max}}$ is itself feasible. Therefore the projection of $\frac{y}{\lambda_{\max}}$ onto $F$ is itself, i.e., $\theta^*(\lambda_{\max}) = \frac{y}{\lambda_{\max}}$. Moreover, by noting that for $\lambda > \lambda_{\max}$ we have $|x_i^T y/\lambda| < 1,\ \forall i \in \mathcal{I}$, i.e., all predictors are in the inactive set at $\theta^*(\lambda)$, we conclude that the solution to problem (1) is 0. Combining all these together and plugging $\theta^*(\lambda_{\max}) = \frac{y}{\lambda_{\max}}$ into Eq. (9), we obtain the following screening rule.
Corollary 3. DPP: For the Lasso problem (1), let $\lambda_{\max} = \max_i |x_i^T y|$. If $\lambda \geq \lambda_{\max}$, then $[\beta^*]_i = 0,\ \forall i \in \mathcal{I}$. Otherwise, $[\beta^*(\lambda)]_i = 0$ if
$$\left|x_i^T \tfrac{y}{\lambda_{\max}}\right| < 1 - \|x_i\|_2\|y\|_2\left(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_{\max}}\right).$$
Clearly, DPP is most effective when $\lambda$ is close to $\lambda_{\max}$. So how can we find a new $\theta^*(\lambda')$ with $\lambda' < \lambda_{\max}$? Note that Eq. (6) is in fact a natural bridge relating the primal and dual optimal solutions: as long as we know $\beta^*(\lambda')$, it is easy to get $\theta^*(\lambda')$. Such solutions are readily produced by, e.g., the LARS [7] and Homotopy [17] algorithms.
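To make the rule concrete, here is a minimal sketch of the DPP screening step in Corollary 3 (our illustration in Python/NumPy, not code from the paper); it returns a Boolean mask of predictors guaranteed to be inactive at a given $\lambda$:

```python
import numpy as np

def dpp_screen(X, y, lam):
    """Basic DPP rule (Corollary 3): True marks predictors with [beta*(lam)]_i = 0."""
    corr = X.T @ y                              # x_i^T y for every predictor
    lam_max = np.abs(corr).max()
    if lam >= lam_max:                          # the Lasso solution is 0
        return np.ones(X.shape[1], dtype=bool)
    xnorms = np.linalg.norm(X, axis=0)          # ||x_i||_2
    rhs = 1.0 - xnorms * np.linalg.norm(y) * (1.0 / lam - 1.0 / lam_max)
    return np.abs(corr) / lam_max < rhs
```

Columns flagged True can be dropped from $X$ before invoking any Lasso solver, which is exactly the reduction in problem size that Table 1 quantifies.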
Table 1: Illustration of the running time for DPP screening and for solving the Lasso problem after screening. $T_s$: time for screening. $T_l$: time for solving the Lasso problem after screening. $T_o$: the total time. Entries of the response vector $y$ are i.i.d. standard Gaussian. Columns of the data matrix $X \in \mathbb{R}^{1000 \times 100000}$ are generated by $x_i = y + \alpha z$, where $\alpha$ is a random number drawn uniformly from $[0, 1]$ and the entries of $z$ are i.i.d. standard Gaussian. $\lambda_{\max} = 0.95$ and $\lambda/\lambda_{\max} = 0.5$.

              Lasso      DPP       DPP2      DPP5      DPP10     DPP20
  T_s (s)     --         0.035     0.073     0.152     0.321     0.648
  T_l (s)     --         10.250    9.634     8.399     1.369     0.121
  T_o (s)     103.314    10.285    9.707     8.552     1.690     0.769
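For reproducibility, a sketch of the synthetic setup described in the table caption (our reading of it; drawing one $\alpha$ per column is an assumption where the caption is ambiguous, and $p$ may be reduced to keep $X$ in memory):

```python
rng = np.random.default_rng(0)
n, p = 1000, 100000                     # X takes roughly 0.8 GB in float64
y = rng.standard_normal(n)              # response: i.i.d. standard Gaussian
alpha = rng.uniform(0.0, 1.0, size=p)   # uniform random scaling, one per column
Z = rng.standard_normal((n, p))         # i.i.d. standard Gaussian noise
X = y[:, None] + alpha[None, :] * Z     # columns x_i = y + alpha * z
```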
Remark: Xiang et al. [26] developed a general sphere test which says that if $\theta^*$ is estimated to lie inside a ball $\|\theta^* - q\|_2 \leq r$, then $|x_i^T q| < 1 - r \Rightarrow [\beta^*]_i = 0$. Considering the DPP rules in Theorem 2, this is equivalent to setting $q = \theta^*(\lambda')$ and $r = \|y\|_2\left|\tfrac{1}{\lambda'} - \tfrac{1}{\lambda''}\right|$. Therefore, different from the sphere test and the Dome test developed in [26, 25], in which the radius $r$ is fixed at the beginning, the construction of our DPP rules is equivalent to an "$r$-decreasing" process. Clearly, the smaller $r$ is, the more effective the DPP rules will be.
Remark: Notice that DPP is not the same as ST1 [26] and SAFE [8], which discard the $i$-th feature if $|x_i^T y| < \lambda - \|x_i\|_2\|y\|_2\frac{\lambda_{\max} - \lambda}{\lambda_{\max}}$. From the perspective of the sphere test, the radii of ST1/SAFE and DPP are the same, but the centers of ST1 and DPP are $y/\lambda$ and $y/\lambda_{\max}$ respectively, which leads to different formulas, i.e., Eq. (2) and Corollary 3.
2.3 DPP Rules with LARS/Homotopy Algorithms
It is well known that under mild conditions, the set $\{\beta^*(\lambda) : \lambda > 0\}$ (also known as the regularization path [15]) is continuous and piecewise linear [17, 7, 15]. The output of the LARS or Homotopy algorithms is in fact a sequence of pairs $(\beta^*(\lambda^{(0)}), \lambda^{(0)}), (\beta^*(\lambda^{(1)}), \lambda^{(1)}), \ldots$, where $\beta^*(\lambda^{(i)})$ corresponds to the $i$-th breakpoint of the regularization path $\{\beta^*(\lambda) : \lambda > 0\}$ and the $\lambda^{(i)}$'s are monotonically decreasing. By Eq. (6), once we get $\beta^*(\lambda^{(i)})$, we can immediately compute $\theta^*(\lambda^{(i)})$. Then, according to Theorem 2, we can construct a DPP rule based on $\theta^*(\lambda^{(i)})$ and $\lambda^{(i)}$. For convenience, if the DPP rule is built based on $\theta^*(\lambda^{(i)})$, we add the index $i$ as a suffix to DPP; e.g., DPP5 means the rule is developed based on $\theta^*(\lambda^{(5)})$. It should be noted that the LARS or Homotopy algorithms are very efficient at finding the first few breakpoints of the regularization path and the corresponding parameters. For the first few breakpoints, the computational cost is roughly $O(Np)$, i.e., linear in the size of the data matrix $X$. In Table 1, we report both the time used for screening and the time needed to solve the Lasso problem after screening. The Lasso solver is from the SLEP [14] package.
From Table 1, we can see that, compared with the time saved by the screening rules, the time used for screening is negligible. The efficiency of the Lasso solver is improved by DPP20 by more than 130 times. In practice, DPP rules built on the first few $\theta^*(\lambda^{(i)})$'s lead to more significant performance improvement than existing state-of-the-art screening tests. We will demonstrate the effectiveness of our DPP rules in the experiment section. As another useful property of the LARS/Homotopy algorithms, it is worthwhile to mention that changes of the active set only happen at the breakpoints [17, 7, 15]. Consequently, given the parameters corresponding to a pair of adjacent breakpoints, e.g., $\lambda^{(i)}$ and $\lambda^{(i+1)}$, the active set for $\lambda \in (\lambda^{(i+1)}, \lambda^{(i)})$ is the same as for $\lambda = \lambda^{(i)}$. Therefore, besides the sequence of breakpoints and the associated parameters $(\beta^*(\lambda^{(0)}), \lambda^{(0)}), \ldots, (\beta^*(\lambda^{(k)}), \lambda^{(k)})$ computed by the LARS/Homotopy algorithms, we know the active set for all $\lambda \geq \lambda^{(k)}$. Hence we can remove the predictors in the inactive set from the optimization problem (1). This scheme has been embedded in the DPP rules.
Remark: Some works, e.g., [21] and [8], solve several Lasso problems for different parameters to improve the screening performance. However, the DPP algorithms do not aim to solve a sequence of Lasso problems, but just to accelerate one. The LARS/Homotopy algorithms are used to find the first few breakpoints of the regularization path and the corresponding parameters, instead of solving general Lasso problems. Thus, different from [21] and [8], which need to iteratively alternate between a screening step and a Lasso step, the DPP algorithms only compute one screening step and one Lasso step.
2.4 Sequential Version of DPP Rules
Motivated by the ideas of [21] and [8], we can develop a sequential version of the DPP rules. In other words, if we are given a sequence of parameter values $\lambda_1 > \lambda_2 > \ldots > \lambda_m$, we can first apply DPP to discard inactive predictors for the Lasso problem (1) with parameter $\lambda_1$. After solving the reduced optimization problem for $\lambda_1$, we obtain the exact solution $\beta^*(\lambda_1)$. Hence, by Eq. (6), we can find $\theta^*(\lambda_1)$. According to Theorem 2, once we know the optimal dual solution $\theta^*(\lambda_1)$, we can construct a new screening rule to identify inactive predictors for problem (1) with $\lambda = \lambda_2$. By repeating the above process, we obtain the sequential version of the DPP rule (SDPP).
Corollary 4. SDPP: For the Lasso problem (1), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_m$. Then for any integer $0 \leq k < m$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:
$$\left|x_i^T\,\frac{y - X\beta^*(\lambda_k)}{\lambda_k}\right| < 1 - \|x_i\|_2\|y\|_2\left(\frac{1}{\lambda_{k+1}} - \frac{1}{\lambda_k}\right).$$
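A sketch of the resulting loop (illustrative Python; `lasso_solver` is a placeholder for any exact solver on the reduced problem, e.g., one from the SLEP package mentioned in Section 2.3):

```python
def sdpp(X, y, lam_seq, lasso_solver):
    """Sequential DPP (Corollary 4); lam_seq is decreasing with lam_seq[0] = lambda_max."""
    p = X.shape[1]
    xnorms = np.linalg.norm(X, axis=0)
    ynorm = np.linalg.norm(y)
    beta = np.zeros(p)                          # beta*(lambda_max) = 0
    path = []
    for lam_k, lam_next in zip(lam_seq[:-1], lam_seq[1:]):
        theta = (y - X @ beta) / lam_k          # dual optimum via Eq. (6)
        rhs = 1.0 - xnorms * ynorm * (1.0 / lam_next - 1.0 / lam_k)
        keep = np.abs(X.T @ theta) >= rhs       # survivors of the screening rule
        beta = np.zeros(p)
        beta[keep] = lasso_solver(X[:, keep], y, lam_next)  # solve reduced problem
        path.append(beta.copy())
    return path
```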
Remark: There are some other related works on screening rules. For example, Wu et al. [24] built screening rules for $\ell_1$-penalized logistic regression based on the inner products between the response vector and each predictor; Tibshirani et al. [21] developed strong rules for a set of Lasso-type problems via the inner products between the residual and the predictors; in [9], Fan and Lv studied screening rules for Lasso and related problems. However, all of the above works may mistakenly discard predictors that have non-zero coefficients in the solution. Similar to [8, 26, 25], our DPP rules are exact in the sense that the predictors discarded by our rules are inactive predictors, i.e., predictors that have zero coefficients in the solution.
2.5 Enhanced DPP Rules
In this section, we show how to further improve the DPP rules. From the inequality in (9), we can see that the larger the right-hand side is, the more inactive features can be detected. From the proof of Theorem 2, we therefore need to make the right-hand side of the inequality in (10) as small as possible. Noting that $\theta^*(\lambda') = P_F(\tfrac{y}{\lambda'})$ and $\theta^*(\lambda'') = P_F(\tfrac{y}{\lambda''})$ [please refer to Eq. (5)], the inequality in (10) is in fact a direct consequence of Theorem 1 with $C := F$, $w_1 := \tfrac{y}{\lambda'}$ and $w_2 := \tfrac{y}{\lambda''}$.
On the other hand, suppose $\tfrac{y}{\lambda'} \notin F$, i.e., $\lambda' \in (0, \lambda_{\max})$. It is clear that $\tfrac{y}{\lambda'} \neq P_F(\tfrac{y}{\lambda'}) = \theta^*(\lambda')$. Let $\theta(t) = \theta^*(\lambda') + t\left(\tfrac{y}{\lambda'} - \theta^*(\lambda')\right)$ for $t \geq 0$, i.e., $\theta(t)$ is a point lying on the ray starting from $\theta^*(\lambda')$ and pointing in the same direction as $\tfrac{y}{\lambda'} - \theta^*(\lambda')$. We can observe that $P_F(\theta(t)) = \theta^*(\lambda')$, i.e., the projection of $\theta(t)$ onto the set $F$ is $\theta^*(\lambda')$ as well (please refer to Lemma A in the supplement for details). By applying Theorem 1 again, we have
$$\|\theta^*(\lambda'') - \theta^*(\lambda')\|_2 = \left\|P_F\!\left(\tfrac{y}{\lambda''}\right) - P_F(\theta(t))\right\|_2 \leq \left\|\tfrac{y}{\lambda''} - \theta(t)\right\|_2 = \left\|t\left(\tfrac{y}{\lambda'} - \theta^*(\lambda')\right) - \left(\tfrac{y}{\lambda''} - \theta^*(\lambda')\right)\right\|_2. \qquad (12)$$
Clearly, when $t = 1$, the inequality in (12) reduces to the one in (10). Because the inequality in (12) holds for all $t \geq 0$, we may get a tighter bound by
$$\|\theta^*(\lambda'') - \theta^*(\lambda')\|_2 \leq \min_{t \geq 0}\, \|t v_1 - v_2\|_2, \qquad (13)$$
where $v_1 = \tfrac{y}{\lambda'} - \theta^*(\lambda')$ and $v_2 = \tfrac{y}{\lambda''} - \theta^*(\lambda')$. When $\lambda' = \lambda_{\max}$, we can set $v_1 = \operatorname{sign}(x_*^T y)\, x_*$, where $x_* := \operatorname{argmax}_{x_i} |x_i^T y|$ (please refer to Lemma B in the supplement for details). The minimization problem on the right-hand side of the inequality (13) can be easily solved as follows:
$$\min_{t \geq 0}\, \|t v_1 - v_2\|_2 =: \Delta(\lambda', \lambda'') = \begin{cases} \|v_2\|_2, & \text{if } \langle v_1, v_2\rangle < 0, \\[2pt] \left\|v_2 - \frac{\langle v_1, v_2\rangle}{\|v_1\|_2^2}\, v_1\right\|_2, & \text{otherwise.} \end{cases} \qquad (14)$$
Similar to Theorem 2, we have the following result.
Theorem 5. For the Lasso problem, assume we are given the solution of its dual problem $\theta^*(\lambda')$ for a specific $\lambda'$. Let $\lambda''$ be a nonnegative value different from $\lambda'$. Then $[\beta^*(\lambda'')]_i = 0$ if
$$|x_i^T\theta^*(\lambda')| < 1 - \|x_i\|_2\, \Delta(\lambda', \lambda''). \qquad (15)$$
As we explained above, the right-hand side of the inequality (15) is no less than that of the inequality (9). Thus, the enhanced DPP is able to detect more inactive features than DPP. The analogues of Corollaries 3 and 4 can be easily derived as well.
Corollary 6. DPP$^*$: For the Lasso problem (1), let $\lambda_{\max} = \max_i |x_i^T y|$. If $\lambda \geq \lambda_{\max}$, then $[\beta^*]_i = 0,\ \forall i \in \mathcal{I}$. Otherwise, $[\beta^*(\lambda)]_i = 0$ if the following holds:
$$\left|x_i^T\,\tfrac{y}{\lambda_{\max}}\right| < 1 - \|x_i\|_2\, \Delta(\lambda_{\max}, \lambda).$$
Corollary 7. SDPP$^*$: For the Lasso problem (1), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_m$. Then for any integer $0 \leq k < m$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:
$$\left|x_i^T\,\frac{y - X\beta^*(\lambda_k)}{\lambda_k}\right| < 1 - \|x_i\|_2\, \Delta(\lambda_k, \lambda_{k+1}).$$
To simplify notation, we denote the enhanced DPP and SDPP by DPP$^*$ and SDPP$^*$, respectively.
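The enhanced rule is just as easy to implement; the following sketch (our Python illustration, not code from the paper) evaluates Corollary 6 at $\lambda' = \lambda_{\max}$, with $v_1$ chosen as in Lemma B:

```python
def delta(v1, v2):
    """Eq. (14): min over t >= 0 of ||t v1 - v2||_2."""
    ip = v1 @ v2
    if ip < 0:
        return np.linalg.norm(v2)
    return np.linalg.norm(v2 - (ip / (v1 @ v1)) * v1)

def dpp_star_screen(X, y, lam):
    """Enhanced DPP rule (Corollary 6), screening from lambda_max."""
    corr = X.T @ y
    lam_max = np.abs(corr).max()
    if lam >= lam_max:
        return np.ones(X.shape[1], dtype=bool)
    i_star = np.abs(corr).argmax()
    v1 = np.sign(corr[i_star]) * X[:, i_star]   # v1 = sign(x_*^T y) x_*
    v2 = y / lam - y / lam_max                  # v2 = y/lambda - theta*(lambda_max)
    xnorms = np.linalg.norm(X, axis=0)
    return np.abs(corr) / lam_max < 1.0 - xnorms * delta(v1, v2)
```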
3 Extensions to Group Lasso
To demonstrate the flexibility of the DPP rules, we extend our idea to the group Lasso problem [27]:
$$\inf_{\beta \in \mathbb{R}^p}\ \tfrac{1}{2}\Big\|y - \sum_{g=1}^{G} X_g \beta_g\Big\|_2^2 + \lambda \sum_{g=1}^{G} \sqrt{n_g}\, \|\beta_g\|_2, \qquad (16)$$
where $X_g \in \mathbb{R}^{N \times n_g}$ is the data matrix for the $g$-th group and $p = \sum_{g=1}^{G} n_g$. The corresponding dual problem of (16) is (see the detailed derivation in the supplemental materials):
$$\sup_{\theta}\ \left\{ \tfrac{1}{2}\|y\|_2^2 - \tfrac{\lambda^2}{2}\Big\|\theta - \tfrac{y}{\lambda}\Big\|_2^2 \;:\; \|X_g^T\theta\|_2 \leq \sqrt{n_g},\; g = 1, 2, \ldots, G \right\} \qquad (17)$$
Similar to the Lasso problem, the primal and dual optimal solutions of the group Lasso satisfy
$$y = \sum_{g=1}^{G} X_g \beta_g^* + \lambda\theta^*, \qquad (18)$$
and the KKT conditions are
$$(\theta^*)^T X_g \in \begin{cases} \sqrt{n_g}\, \dfrac{(\beta_g^*)^T}{\|\beta_g^*\|_2} & \text{if } \beta_g^* \neq 0, \\[4pt] \left\{\sqrt{n_g}\, u^T : \|u\|_2 \leq 1\right\} & \text{if } \beta_g^* = 0, \end{cases} \qquad (19)$$
for $g = 1, 2, \ldots, G$. Clearly, if $\|(\theta^*)^T X_g\|_2 < \sqrt{n_g}$, we can conclude that $\beta_g^* = 0$.
Consider problem (17). It is easy to see that the dual optimum $\theta^*$ is the projection of $\frac{y}{\lambda}$ onto the feasible set. For each $g$, the constraint $\|X_g^T\theta\|_2 \leq \sqrt{n_g}$ confines $\theta$ to an ellipsoid, which is closed and convex. Therefore, the feasible set of the dual problem (17) is the intersection of ellipsoids and thus closed and convex. Hence $\theta^*(\lambda)$ is also nonexpansive for the group Lasso problem. Similar to Theorem 2, we can readily develop the following theorem for group Lasso.
Theorem 8. For the group Lasso problem, assume we are given the solution of its dual problem $\theta^*(\lambda')$ for a specific $\lambda'$. Let $\lambda''$ be a nonnegative value different from $\lambda'$. Then $\beta_g^*(\lambda'') = 0$ if
$$\|X_g^T\theta^*(\lambda')\|_2 < \sqrt{n_g} - \|X_g\|_F\|y\|_2\left|\tfrac{1}{\lambda'} - \tfrac{1}{\lambda''}\right|. \qquad (20)$$
Similar to the Lasso problem, let $\lambda_{\max} = \max_g \|X_g^T y\|_2/\sqrt{n_g}$. We can see that $\frac{y}{\lambda_{\max}}$ is itself feasible, and $\lambda_{\max}$ is the largest parameter such that problem (16) has a nonzero solution; clearly, $\theta^*(\lambda_{\max}) = \frac{y}{\lambda_{\max}}$. Similar to DPP and SDPP, we can construct GDPP and SGDPP for group Lasso.
Corollary 9. GDPP: For the group Lasso problem (16), let $\lambda_{\max} = \max_g \|X_g^T y\|_2/\sqrt{n_g}$. If $\lambda \geq \lambda_{\max}$, then $\beta_g^*(\lambda) = 0,\ \forall g = 1, 2, \ldots, G$. Otherwise, we have $\beta_g^*(\lambda) = 0$ if the following holds:
$$\left\|X_g^T\,\tfrac{y}{\lambda_{\max}}\right\|_2 < \sqrt{n_g} - \|X_g\|_F\|y\|_2\left(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_{\max}}\right). \qquad (21)$$
Corollary 10. SGDPP: For the group Lasso problem (16), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_m$. For any integer $0 \leq k < m$, we have $\beta_g^*(\lambda_{k+1}) = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:
$$\left\|X_g^T\,\frac{y - \sum_{g'=1}^{G} X_{g'}\beta_{g'}^*(\lambda_k)}{\lambda_k}\right\|_2 < \sqrt{n_g} - \|X_g\|_F\|y\|_2\left(\frac{1}{\lambda_{k+1}} - \frac{1}{\lambda_k}\right). \qquad (22)$$
Remark: Similar to DPP$^*$, we can develop the enhanced GDPP by simply replacing the term $\|y\|_2(1/\lambda - 1/\lambda_{\max})$ on the right-hand side of the inequality (21) with $\Delta(\lambda_{\max}, \lambda)$. Notice that, to compute $\Delta(\lambda_{\max}, \lambda)$, we set $v_1 = X_*(X_*)^T y$, where $X_* = \operatorname{argmax}_{X_g} \|X_g^T y\|_2/\sqrt{n_g}$ (please refer to Lemma C in the supplement for details). The analogue of SDPP$^*$, that is, SGDPP$^*$, can be obtained by replacing the term $\|y\|_2(1/\lambda_{k+1} - 1/\lambda_k)$ on the right-hand side of the inequality (22) with $\Delta(\lambda_k, \lambda_{k+1})$.
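A sketch of the GDPP rule of Corollary 9 (illustrative Python of our own; groups are passed as a list of column blocks $X_g$):

```python
def gdpp_screen(X_groups, y, lam):
    """GDPP (Corollary 9): True marks groups with beta_g*(lam) = 0 guaranteed."""
    scores = np.array([np.linalg.norm(Xg.T @ y) / np.sqrt(Xg.shape[1])
                       for Xg in X_groups])
    lam_max = scores.max()                      # lambda_max = max_g ||X_g^T y|| / sqrt(n_g)
    if lam >= lam_max:
        return np.ones(len(X_groups), dtype=bool)
    ynorm = np.linalg.norm(y)
    mask = np.zeros(len(X_groups), dtype=bool)
    for g, Xg in enumerate(X_groups):
        lhs = np.linalg.norm(Xg.T @ (y / lam_max))
        rhs = (np.sqrt(Xg.shape[1])
               - np.linalg.norm(Xg, 'fro') * ynorm * (1.0 / lam - 1.0 / lam_max))
        mask[g] = lhs < rhs                     # inequality (21)
    return mask
```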
4 Experiments
In Section 4.1, we first evaluate the DPP and DPP$^*$ rules on both real and synthetic data. We then compare the performance of DPP with Dome (see [25, 26]), which achieves state-of-the-art performance for the Lasso problem among exact screening rules [25]. We evaluate GDPP and SGDPP for the group Lasso problem on three synthetic data sets in Section 4.2. We are not aware of any "exact" screening rules for the group Lasso problem at this point.
Figure 1: Comparison of DPP and DPP$^*$ rules on the MNIST and COIL data sets. Panels: (a) MNIST, DPP2/DPP$^*$2; (b) MNIST, DPP5/DPP$^*$5; (c) COIL, DPP2/DPP$^*$2; (d) COIL, DPP5/DPP$^*$5.
To measure the performance of our screening rules, we compute the rejection rate, i.e., the ratio between the number of predictors discarded by the screening rules and the actual number of zero predictors in the ground truth. Because the DPP rules are exact, i.e., no active predictors will be mistakenly discarded, the rejection rate will be at most one. For SAFE and Dome, it is not straightforward to extend them to the group Lasso problem. Similarly to previous works [26], we do not report the computational time saved by screening because it can be easily computed from the rejection ratio. Specifically, if the Lasso solver is linear in the size of the data matrix $X$, a $K\%$ rejection of the data saves $K\%$ of the computational time. The general experiment settings are as follows. For each data set, after we construct the data matrix $X$ and the response $y$, we run the screening rules along a sequence of 100 values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0 to 1. We repeat the procedure 100 times and report the average performance at each of the 100 values of $\lambda/\lambda_{\max}$. All of the screening rules are implemented in Matlab. The experiments are carried out on an Intel(R) i7-2600 3.4GHz processor.
4.1 DPPs and DPP*s for the Lasso Problem
In this experiment, we first compare the performance of the proposed DPP rules with the enhanced DPP rules (DPP$^*$) on (a) the MNIST handwritten digit data set [13] and (b) the COIL rotational image data set [16] in Section 4.1.1. We show that the DPP$^*$ rules are more effective in identifying inactive features than the DPP rules, which demonstrates our theoretical results in Section 2.5. Then we evaluate the DPP$^*$/SDPP$^*$ rules and Dome on (c) the ADNI data set; (d) the Olivetti Faces data set [19]; (e) the Yahoo web pages data sets [22]; and (f) a synthetic data set whose entries are i.i.d. standard Gaussian.
4.1.1 Comparison of DPP and DPP*
As we explained in Section 2.5, all inactive features detected by the DPP rules can also be detected by the DPP$^*$ rules, but the converse is not necessarily true. To demonstrate the advantage of the DPP$^*$ rules, we run DPP2, DPP$^*$2, DPP5 and DPP$^*$5 on the MNIST and COIL data sets. a) The MNIST data set contains grey images of scanned handwritten digits, including 60,000 for training and 10,000 for testing. The dimension of each image is $28 \times 28$. Each time, we first randomly select 100 images for each digit (in total we have 1000 images) and get a data matrix $X \in \mathbb{R}^{784 \times 1000}$. Then we randomly pick an image as the response $y \in \mathbb{R}^{784}$. b) The COIL data set includes 100 objects, each of which has 72 color images with $128 \times 128$ pixels. The images that belong to the same object are taken every 5 degrees by rotating the object. We use the images of object 10. Each time, we randomly pick one of the images as the response vector $y \in \mathbb{R}^{49152}$ and use all the remaining ones to construct the data matrix $X \in \mathbb{R}^{49152 \times 71}$. The average $\lambda_{\max}$ values for the so-constructed MNIST and COIL data sets are 0.837 and 0.986, respectively. Clearly, the predictors in these data sets are highly correlated.
From Figure 1, we observe that DPP$^*$2 significantly outperforms DPP2 on both data sets, especially when $\lambda/\lambda_{\max}$ is small. We observe the same pattern for DPP5 and DPP$^*$5, verifying the claims about DPP$^*$ made in the paper. Thus, in the following experiments, we only report the performance of DPP$^*$ and the competing algorithm Dome.
4.1.2 Comparison of DPP*/SDPP* and Dome
In this experiment, we compare the DPP$^*$/SDPP$^*$ rules with Dome. Among the family of DPP$^*$ rules, we only report the performance of DPP$^*$5 and DPP$^*$10 on the following four data sets.
c) The Alzheimer's Disease Neuroimaging Initiative (ADNI; available at www.loni.ucla.edu/ADNI) studies the disease progression of Alzheimer's. The ADNI data set includes 434 patients with 306 features extracted from their baseline MRI scans. Each time we randomly select 90% of the samples to construct the data matrix $X \in \mathbb{R}^{391 \times 306}$. The response $y$ is the patients' MMSE cognitive scores [29]. d) The Olivetti faces data set includes 400 grey-scale face images of size $64 \times 64$ for 40 people (10 for each). Each time, we randomly take one of the images as the response vector $y \in \mathbb{R}^{4096}$
Figure 2: Comparison of DPP$^*$/SDPP$^*$ rules and Dome on three real data sets (ADNI, Olivetti faces, Yahoo-Computers) and one synthetic data set. Panels: (a) ADNI; (b) Olivetti; (c) Yahoo-Computers; (d) Synthetic.
Figure 3: Performance of GDPP and SGDPP applied to three synthetic data sets. Panels: (a) 20 groups; (b) 50 groups; (c) 100 groups.
and the data matrix $X \in \mathbb{R}^{4096 \times 399}$ is constructed from the remaining ones. e) The Yahoo data sets include 11 top-level categories such as Computers, Education, Health, Recreation, and Science. Each category is further divided into a set of subcategories. Each time, we construct a balanced binary classification data set from the topic of Computers: we choose samples from one subcategory as the positive class and randomly sample an equal number of samples from the rest of the subcategories as the negative class. The size of the data matrix is $876 \times 25259$ and the response vector is the binary label of the samples. f) For the synthetic data set, $X \in \mathbb{R}^{100 \times 5000}$ and the response vector $y \in \mathbb{R}^{100}$; all of the entries are i.i.d. standard Gaussian.
The average $\lambda_{\max}$ values of the above four data sets are 0.7273, 0.989, 0.914, and 0.371, respectively. The predictors in the ADNI, Yahoo-Computers and Olivetti data sets are highly correlated, as indicated by the average $\lambda_{\max}$. In contrast to the real data sets, the average $\lambda_{\max}$ of the synthetic data is small. As noted in [26, 25], Dome is very effective in discarding inactive features when $\lambda_{\max}$ is large. From Fig. 2, we observe that Dome performs much better on the real data sets compared to the synthetic data. However, the proposed rules are able to identify far more inactive features than Dome on both real and synthetic data, even for the cases in which $\lambda_{\max}$ is small.
4.2 GDPPs for the Group Lasso Problem
We apply the GDPP rules to three synthetic data sets. The entries of the data matrix $X \in \mathbb{R}^{100 \times 1000}$ and the response vector $y$ are generated i.i.d. from the standard Gaussian distribution. For each of the cases, we randomly divide $X$ into 20, 50, and 100 groups. We compare the performance of GDPP and SGDPP along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale. We repeat the above procedure 100 times for each of the cases and report the average performance. The average $\lambda_{\max}$ values are 0.136, 0.167, and 0.219, respectively. As shown in Fig. 3, SGDPP significantly outperforms GDPP, which only makes use of the information of the dual optimal solution at a single point, as expected. For more discussion, please refer to the supplement.
5 Conclusion
In this paper, we develop new screening rules for the Lasso problem by making use of the nonexpansiveness of the projection operator with respect to a closed convex set. Our new methods, i.e., the DPP rules, are able to effectively identify inactive predictors of the Lasso problem, thus greatly reducing the size of the optimization problem. Moreover, we further improve the DPP rules and propose the enhanced DPP rules, that is, the DPP$^*$ rules, which are even more effective in discarding inactive predictors than the DPP rules. The idea of the DPP and DPP$^*$ rules can be easily generalized to screen the inactive groups of the group Lasso problem. Extensive experiments on both synthetic and real data demonstrate the effectiveness of the proposed rules. Moreover, the DPP and DPP$^*$ rules can be combined with any Lasso solver as a speedup tool. In the future, we plan to generalize our idea to other sparse formulations with more general structured sparse penalties, e.g., tree/graph Lasso.
Acknowledgments
This work was supported in part by NIH (LM010730) and NSF (IIS-0953662, CCF-1025177).
References
[1] S. R. Becker, E. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Technical report, Stanford University, 2010.
[2] D. P. Bertsekas. Convex Analysis and Optimization. Athena Scientific, 2003.
[3] A. Bruckstein, D. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51:34–81, 2009.
[4] E. Candès. Compressive sampling. In Proceedings of the International Congress of Mathematicians, 2006.
[5] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43:129–159, 2001.
[6] D. L. Donoho and Y. Tsaig. Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Transactions on Information Theory, 54:4789–4812, 2008.
[7] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407–499, 2004.
[8] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific Journal of Optimization, 8:667–698, 2012.
[9] J. Fan and J. Lv. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society Series B, 70:849–911, 2008.
[10] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1:302–332, 2007.
[11] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33:1–22, 2010.
[12] S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1:606–617, 2007.
[13] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[14] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.
[15] J. Mairal and B. Yu. Complexity analysis of the lasso regularization path. In ICML, 2012.
[16] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL). Technical Report CUCS-006-96, Dept. of Computer Science, Columbia University, 1996.
[17] M. R. Osborne, B. Presnell, and B. A. Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20:389–404, 2000.
[18] M. Y. Park and T. Hastie. L1-regularization path algorithm for generalized linear models. Journal of the Royal Statistical Society Series B, 69:659–677, 2007.
[19] F. Samaria and A. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, 1994.
[20] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58:267–288, 1996.
[21] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society Series B, 74:245–266, 2012.
[22] N. Ueda and K. Saito. Parametric mixture models for multi-labeled text. Advances in Neural Information Processing Systems, 15:721–728, 2002.
[23] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 2010.
[24] T. T. Wu, Y. F. Chen, T. Hastie, E. Sobel, and K. Lange. Genomewide association analysis by lasso penalized logistic regression. Bioinformatics, 25:714–721, 2009.
[25] Z. J. Xiang and P. J. Ramadge. Fast lasso screening tests based on correlations. In IEEE ICASSP, 2012.
[26] Z. J. Xiang, H. Xu, and P. J. Ramadge. Learning sparse representations of high dimensional data on large scale dictionaries. In NIPS, 2011.
[27] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68:49–67, 2006.
[28] P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
[29] J. Zhou, L. Yuan, J. Liu, and J. Ye. A multi-task learning formulation for predicting disease progression. In KDD, pages 814–822. ACM, 2011.
A Kernel Test for Three-Variable Interactions
Dino Sejdinovic, Arthur Gretton
Gatsby Unit, CSML, UCL, UK
{dino.sejdinovic, arthur.gretton}@gmail.com
Wicher Bergsma
Department of Statistics, LSE, UK
[email protected]
Abstract
We introduce kernel nonparametric tests for Lancaster three-variable interaction
and for total independence, using embeddings of signed measures into a reproducing kernel Hilbert space. The resulting test statistics are straightforward to
compute, and are used in powerful interaction tests, which are consistent against
all alternatives for a large family of reproducing kernels. We show the Lancaster
test to be sensitive to cases where two independent causes individually have weak
influence on a third dependent variable, but their combined effect has a strong
influence. This makes the Lancaster test especially suited to finding structure in
directed graphical models, where it outperforms competing nonparametric tests in
detecting such V-structures.
1 Introduction
The problem of nonparametric testing of interaction between variables has been widely treated in
the machine learning and statistics literature. Much of the work in this area focuses on measuring
or testing pairwise interaction: for instance, the Hilbert-Schmidt Independence Criterion (HSIC) or
Distance Covariance [1, 2, 3], kernel canonical correlation [4, 5, 6], and mutual information [7].
In cases where more than two variables interact, however, the questions we can ask about their interaction become significantly more involved. The simplest case we might consider is whether the variables are mutually independent, $P_X = \prod_{i=1}^{d} P_{X_i}$, as considered in $\mathbb{R}^d$ by [8]. This is already a more general question than pairwise independence, since pairwise independence does not imply total (mutual) independence, while the implication holds in the other direction. For example, if $X$ and $Y$ are i.i.d. uniform on $\{-1, 1\}$, then $(X, Y, XY)$ is a pairwise independent but mutually dependent triplet [9]. Tests of total and pairwise independence are insufficient, however, since they
do not rule out all third order factorizations of the joint distribution.
An important class of high-order interactions occurs when the simultaneous effect of two variables on a third may not be additive. In particular, it may be possible that $X \perp\!\!\!\perp Z$ and $Y \perp\!\!\!\perp Z$, whereas $\neg\big((X, Y) \perp\!\!\!\perp Z\big)$ (for example, neither adding sugar to coffee nor stirring the coffee individually has an effect on its sweetness, but the joint presence of the two does). In addition, the study of three-variable interactions can elucidate certain switching mechanisms between positive and negative correlation of two gene expressions, as controlled by a third gene [10]. The presence
of such interactions is typically tested using some form of analysis of variance (ANOVA) model
which includes additional interaction terms, such as products of individual variables. Since each
such additional term requires a new hypothesis test, this increases the risk that some hypothesis test
will produce a false positive by chance. Therefore, a test that is able to directly detect the presence
of any kind of higher-order interaction would be of a broad interest in statistical modeling. In the
present work, we provide to our knowledge the first nonparametric test for three-variable interaction.
This work generalizes the HSIC test of pairwise independence, and has as its test statistic the norm
of an embedding of an appropriate signed measure to a reproducing kernel Hilbert space (RKHS).
When the statistic is non-zero, all third order factorizations can be ruled out. Moreover, this test is
applicable to the cases where X, Y and Z are themselves multivariate objects, and may take values
in non-Euclidean or structured domains.1
One important application of interaction measures is in learning structure for graphical models. If
the graphical model is assumed to be Gaussian, then second order interaction statistics may be used
to construct an undirected graph [11, 12]. When the interactions are non-Gaussian, however, other
approaches are brought to bear. An alternative approach to structure learning is to employ conditional independence tests. In the PC algorithm [13, 14, 15], a V-structure (a directed graphical model
with two independent parents pointing to a single child) is detected when an independence test between the parent variables accepts the null hypothesis, while a test of dependence of the parents
conditioned on the child rejects the null hypothesis. The PC algorithm gives a correct equivalence
class of structures subject to the causal Markov and faithfulness assumptions, in the absence of
hidden common causes. The original implementations of the PC algorithm rely on partial correlations for testing, and assume Gaussianity. A number of algorithms have since extended the basic
PC algorithm to arbitrary probability distributions over multivariate random variables [16, 17, 18],
by using nonparametric kernel independence tests [19] and conditional dependence tests [20, 18].
We observe that our Lancaster interaction based test provides a strong alternative to the conditional
dependence testing approach, and is seen to outperform earlier approaches in detecting cases where
independent parent variables weakly influence the child variable when considered individually, but
have a strong combined influence.
We begin our presentation in Section 2 with a definition of interaction measures, these being the
signed measures we will embed in an RKHS. We cover this embedding procedure in Section 3. We
then proceed in Section 4 to define pairwise and three way interactions. We describe a statistic to
test mutual independence for more than three variables, and provide a brief overview of the more
complex high-order interactions that may be observed when four or more variables are considered.
Finally, we provide experimental benchmarks in Section 5.
2 Interaction Measure
An interaction measure [21, 22] associated to a multidimensional probability distribution $P$ of a random vector $(X_1, \ldots, X_D)$ taking values in the product space $\mathcal{X}_1 \times \cdots \times \mathcal{X}_D$ is a signed measure $\Delta P$ that vanishes whenever $P$ can be factorised in a non-trivial way as a product of its (possibly multivariate) marginal distributions. For the cases $D = 2, 3$, the correct interaction measure coincides with the notion introduced by Lancaster [21] as a formal product
$$\Delta_L P = \prod_{i=1}^{D}\left(P_{X_i}^* - P_{X_i}\right), \qquad (1)$$
where each product $\prod_{j=1}^{D'} P_{X_{i_j}}^*$ signifies the joint probability distribution $P_{X_{i_1}\cdots X_{i_{D'}}}$ of a subvector $(X_{i_1}, \ldots, X_{i_{D'}})$. We will term the signed measure in (1) the Lancaster interaction measure. In the case of a bivariate distribution, the Lancaster interaction measure is simply the difference between the joint probability distribution and the product of the marginal distributions (the only possible non-trivial factorization for $D = 2$), $\Delta_L P = P_{XY} - P_X P_Y$, while in the case $D = 3$, we obtain
$$\Delta_L P = P_{XYZ} - P_{XY}P_Z - P_{YZ}P_X - P_{XZ}P_Y + 2P_X P_Y P_Z. \qquad (2)$$
It is readily checked that
$$(X, Y) \perp\!\!\!\perp Z \ \ \vee\ \ (X, Z) \perp\!\!\!\perp Y \ \ \vee\ \ (Y, Z) \perp\!\!\!\perp X \ \ \Longrightarrow\ \ \Delta_L P = 0. \qquad (3)$$
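As a quick check of one case of (3) (a verification step added here for clarity): if $(X, Y) \perp\!\!\!\perp Z$, then also $X \perp\!\!\!\perp Z$ and $Y \perp\!\!\!\perp Z$, so $P_{XYZ} = P_{XY}P_Z$, $P_{XZ} = P_X P_Z$ and $P_{YZ} = P_Y P_Z$; substituting into (2) gives
$$\Delta_L P = P_{XY}P_Z - P_{XY}P_Z - P_Y P_Z P_X - P_X P_Z P_Y + 2P_X P_Y P_Z = 0,$$
and the other two factorizations follow by symmetry.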
For $D > 3$, however, (1) does not capture all possible factorizations of the joint distribution; e.g., for $D = 4$, it need not vanish if $(X_1, X_2) \perp\!\!\!\perp (X_3, X_4)$, but $X_1$ and $X_2$ are dependent and $X_3$ and $X_4$ are dependent. Streitberg [22] corrected this definition using a more complicated construction with the Möbius function on the lattice of partitions, which we describe in Section 4.3. In this work, however, we will focus on the case of three variables and formulate interaction tests based on embedding of (2) into an RKHS.
¹As the reader might imagine, the situation becomes more complex again when four or more variables interact simultaneously; we provide a brief technical overview in Section 4.3.
The implication (3) states that the presence of Lancaster interaction rules out the possibility of any factorization of the joint distribution, but the converse is not generally true; see Appendix C for details. In addition, it is important to note the distinction between the absence of Lancaster interaction and the total (mutual) independence of $(X, Y, Z)$, i.e., $P_{XYZ} = P_X P_Y P_Z$. While total independence implies the absence of Lancaster interaction, the signed measure $\Delta_{tot} P = P_{XYZ} - P_X P_Y P_Z$ associated to the total (mutual) independence of $(X, Y, Z)$ does not vanish if, e.g., $(X, Y) \perp\!\!\!\perp Z$, but $X$ and $Y$ are dependent.
In this contribution, we construct the non-parametric test for the hypothesis $\Delta_L P = 0$ (no Lancaster interaction), as well as the non-parametric test for the hypothesis $\Delta_{tot} P = 0$ (total independence), based on the embeddings of the corresponding signed measures $\Delta_L P$ and $\Delta_{tot} P$ into an RKHS. Both tests are particularly suited to the cases where $X$, $Y$ and $Z$ take values in a high-dimensional space; moreover, they remain valid for a variety of non-Euclidean and structured domains, i.e., for all topological spaces on which it is possible to construct a valid positive definite function; see [23] for details. In the case of total independence testing, our approach can be viewed as a generalization of the tests proposed in [24] based on empirical characteristic functions.
3 Kernel Embeddings
We review the embedding of signed measures to a reproducing kernel Hilbert space. The RKHS norms of such embeddings will then serve as our test statistics. Let $\mathcal{Z}$ be a topological space. According to the Moore-Aronszajn theorem [25, p. 19], for every symmetric, positive definite function (henceforth kernel) $k : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$, there is an associated reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$ of real-valued functions on $\mathcal{Z}$ with reproducing kernel $k$. The map $\varphi : \mathcal{Z} \to \mathcal{H}_k$, $\varphi : z \mapsto k(\cdot, z)$ is called the canonical feature map or the Aronszajn map of $k$. Denote by $\mathcal{M}(\mathcal{Z})$ the Banach space of all finite signed Borel measures on $\mathcal{Z}$. The notion of a feature map can then be extended to kernel embeddings of elements of $\mathcal{M}(\mathcal{Z})$ [25, Chapter 4].
Definition 1. (Kernel embedding) Let $k$ be a kernel on $\mathcal{Z}$, and $\nu \in \mathcal{M}(\mathcal{Z})$. The kernel embedding of $\nu$ into the RKHS $\mathcal{H}_k$ is $\mu_k(\nu) \in \mathcal{H}_k$ such that $\int f(z)\, d\nu(z) = \langle f, \mu_k(\nu)\rangle_{\mathcal{H}_k}$ for all $f \in \mathcal{H}_k$.
Alternatively, the kernel embedding can be defined by the Bochner integral $\mu_k(\nu) = \int k(\cdot, z)\, d\nu(z)$.
If a measurable kernel $k$ is a bounded function, it is straightforward to show using the Riesz representation theorem that $\mu_k(\nu)$ exists for all $\nu \in \mathcal{M}(\mathcal{Z})$.² For many interesting bounded kernels $k$, including the Gaussian, Laplacian and inverse multiquadratics, the embedding $\mu_k : \mathcal{M}(\mathcal{Z}) \to \mathcal{H}_k$ is injective. Such kernels are said to be integrally strictly positive definite (ISPD) [26, p. 4]. A related but weaker notion is that of a characteristic kernel [20, 27], which requires the kernel embedding to be injective only on the set $\mathcal{M}_+^1(\mathcal{Z})$ of probability measures. In the case that $k$ is ISPD, since $\mathcal{H}_k$ is a Hilbert space, we can introduce a notion of an inner product between two signed measures $\nu, \nu' \in \mathcal{M}(\mathcal{Z})$,
$$\langle\langle \nu, \nu'\rangle\rangle_k := \langle \mu_k(\nu), \mu_k(\nu')\rangle_{\mathcal{H}_k} = \int\!\!\int k(z, z')\, d\nu(z)\, d\nu'(z').$$
Since $\mu_k$ is injective, this is a valid inner product and induces a norm on $\mathcal{M}(\mathcal{Z})$, for which $\|\nu\|_k = \langle\langle \nu, \nu\rangle\rangle_k^{1/2} = 0$ if and only if $\nu = 0$. This fact has been used extensively in the literature to formulate: (a) a nonparametric two-sample test based on estimation of the maximum mean discrepancy $\|P - Q\|_k$, for samples $\{X_i\}_{i=1}^{m} \overset{\text{i.i.d.}}{\sim} P$, $\{Y_i\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} Q$ [28], and (b) a nonparametric independence test based on estimation of $\|P_{XY} - P_X P_Y\|_{k\otimes l}$, for a joint sample $\{(X_i, Y_i)\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} P_{XY}$ [19] (the latter is also called the Hilbert-Schmidt independence criterion), with the kernel $k \otimes l$ on the product space defined as $k(x, x')\, l(y, y')$. When a bounded characteristic kernel is used, the above tests are consistent against all alternatives, and their alternative interpretation is as a generalization [29, 3] of energy distance [30, 31] and distance covariance [2, 32].
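To make the embedding-norm statistics concrete, here is a minimal sketch (our Python/NumPy illustration with a Gaussian kernel, not code from the paper) of the biased V-statistic estimate of $\|P - Q\|_k^2$ used in the two-sample test:

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    # Pairwise kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2));
    # A and B are 2-D arrays of shape (num_samples, num_features).
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # V-statistic (biased) estimate of the squared MMD ||P - Q||_k^2
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())
```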
²Unbounded kernels can also be considered, however [3]. In this case, one can still study embeddings of the signed measures $\mathcal{M}_k^{1/2}(\mathcal{Z}) \subset \mathcal{M}(\mathcal{Z})$ which satisfy a finite moment condition, i.e., $\mathcal{M}_k^{1/2}(\mathcal{Z}) = \left\{\nu \in \mathcal{M}(\mathcal{Z}) : \int k^{1/2}(z, z)\, d|\nu|(z) < \infty\right\}$.
Table 1: V-statistic estimates of $\langle\langle \nu, \nu'\rangle\rangle_{k\otimes l}$ in the two-variable case

  ν \ ν'     | P_XY               | P_X P_Y
  P_XY       | (1/n²) (K ∘ L)₊₊   | (1/n³) (KL)₊₊
  P_X P_Y    |                    | (1/n⁴) K₊₊ L₊₊
In this article, we extend this approach to the three-variable case, and formulate tests for both the Lancaster interaction and total independence, using simple consistent estimators of $\|\Delta_L P\|_{k\otimes l\otimes m}$ and $\|\Delta_{tot} P\|_{k\otimes l\otimes m}$ respectively, which we describe in the next section. Using the same arguments as in the tests of [28, 19], these tests are also consistent against all alternatives as long as ISPD kernels are used.
4 Interaction Tests
Notational remarks: Throughout the paper, $\circ$ denotes the Hadamard (entrywise) product. Let $A$ be an $n \times n$ matrix and $K$ a symmetric $n \times n$ matrix. We fix the following notational conventions: $\mathbf{1}$ denotes an $n \times 1$ column of ones; $A_{+j} = \sum_{i=1}^{n} A_{ij}$ denotes the sum of all elements of the $j$-th column of $A$; $A_{i+} = \sum_{j=1}^{n} A_{ij}$ denotes the sum of all elements of the $i$-th row of $A$; $A_{++} = \sum_{i=1}^{n}\sum_{j=1}^{n} A_{ij}$ denotes the sum of all elements of $A$; $K_+ = \mathbf{1}\mathbf{1}^\top K$, i.e., $[K_+]_{ij} = K_{+j} = K_{j+}$, and $[K_+^\top]_{ij} = K_{i+} = K_{+i}$.
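In code, these conventions reduce to simple axis sums (an illustrative NumPy snippet of our own, not from the paper):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
col_sums = A.sum(axis=0)        # A_{+j}: column sums
row_sums = A.sum(axis=1)        # A_{i+}: row sums
total = A.sum()                 # A_{++}: sum of all entries
K = A + A.T                     # a symmetric matrix
K_plus = np.ones((3, 3)) @ K    # K_+ = 1 1^T K, so [K_+]_{ij} = K_{+j}
```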
4.1 Two-Variable (Independence) Test
We provide a short overview of the kernel independence test of [19], which we write as the RKHS norm of the embedding of a signed measure. While this material is not new (it appears in [28, Section 7.4]), it will help define how to proceed when a third variable is introduced and the signed measures become more involved. We begin by expanding the squared RKHS norm $\|P_{XY} - P_X P_Y\|_{k\otimes l}^2$ as inner products and applying the reproducing property:
$$\begin{aligned} \|P_{XY} - P_X P_Y\|_{k\otimes l}^2 = {} & \mathbb{E}_{XY}\mathbb{E}_{X'Y'}\, k(X, X')\, l(Y, Y') + \mathbb{E}_X\mathbb{E}_{X'}\, k(X, X')\ \mathbb{E}_Y\mathbb{E}_{Y'}\, l(Y, Y') \\ & - 2\,\mathbb{E}_{X'Y'}\!\left[\mathbb{E}_X\, k(X, X')\ \mathbb{E}_Y\, l(Y, Y')\right], \end{aligned} \qquad (4)$$
where $(X, Y)$ and $(X', Y')$ are independent copies of random variables on $\mathcal{X} \times \mathcal{Y}$ with distribution $P_{XY}$.
Given a joint sample $\{(X_i, Y_i)\}_{i=1}^{n} \overset{i.i.d.}{\sim} P_{XY}$, an empirical estimator of $\|P_{XY} - P_X P_Y\|^2_{k \otimes l}$ is
obtained by substituting corresponding empirical means into (4), which can be represented using
Gram matrices $K$ and $L$ ($K_{ij} = k(X_i, X_j)$, $L_{ij} = l(Y_i, Y_j)$),
$$\hat{E}_{XY} \hat{E}_{X'Y'}\, k(X,X')\, l(Y,Y') = \frac{1}{n^2} \sum_{a=1}^{n} \sum_{b=1}^{n} K_{ab} L_{ab} = \frac{1}{n^2}\,(K \circ L)_{++},$$
$$\hat{E}_X \hat{E}_{X'}\, k(X,X')\; \hat{E}_Y \hat{E}_{Y'}\, l(Y,Y') = \frac{1}{n^4} \sum_{a=1}^{n} \sum_{b=1}^{n} \sum_{c=1}^{n} \sum_{d=1}^{n} K_{ab} L_{cd} = \frac{1}{n^4}\, K_{++} L_{++},$$
$$\hat{E}_{X'Y'}\!\left[ \hat{E}_X\, k(X,X')\; \hat{E}_Y\, l(Y,Y') \right] = \frac{1}{n^3} \sum_{a=1}^{n} \sum_{b=1}^{n} \sum_{c=1}^{n} K_{ac} L_{bc} = \frac{1}{n^3}\,(KL)_{++}.$$
Since these are V-statistics [33, Ch. 5], there is a bias of $O_P(n^{-1})$; U-statistics may be used if an
unbiased estimate is needed. Each of the terms above corresponds to an estimate of an inner product
$\langle\langle \nu, \nu' \rangle\rangle_{k \otimes l}$ for probability measures $\nu$ and $\nu'$ taking values in $\{P_{XY}, P_X P_Y\}$, as summarized in
Table 1. Even though the second and third terms involve triple and quadruple sums, each of the
empirical means can be computed using sums of all terms of certain matrices, where the dominant
computational cost is in computing the matrix product $KL$. In fact, the overall estimator can be
computed in an even simpler form (see Proposition 9 in Appendix F), as
$$\left\| \hat{P}_{XY} - \hat{P}_X \hat{P}_Y \right\|_{k \otimes l}^2 = \frac{1}{n^2}\,(K \circ HLH)_{++},$$
where $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$ is the centering matrix. Note that by the idempotence of
$H$, we also have that $(K \circ HLH)_{++} = (HKH \circ HLH)_{++}$. In the rest of the paper, for any Gram
matrix $K$, we will denote its corresponding centered matrix $HKH$ by $\tilde{K}$. When three variables are
present, a two-variable test already allows us to determine whether for instance $(X,Y) \perp\!\!\!\perp Z$, i.e.,
whether $P_{XYZ} = P_{XY} P_Z$. It is sufficient to treat $(X,Y)$ as a single variable on the product space
$\mathcal{X} \times \mathcal{Y}$, with the product kernel $k \otimes l$. Then, the Gram matrix associated to $(X,Y)$ is simply $K \circ L$,
and the corresponding V-statistic is $\frac{1}{n^2}\big(K \circ L \circ \tilde{M}\big)_{++}$.³ What is not obvious, however, is if a
V-statistic for the Lancaster interaction (which can be thought of as a surrogate for the composite
hypothesis of various factorizations) can be obtained in a similar form. We will address this question
in the next section.

Table 2: V-statistic estimates of $\langle\langle \nu, \nu' \rangle\rangle_{k \otimes l \otimes m}$ in the three-variable case

    $\nu \backslash \nu'$  | $nP_{XYZ}$                  | $n^2 P_{XY}P_Z$            | $n^2 P_{XZ}P_Y$            | $n^2 P_{YZ}P_X$            | $n^3 P_X P_Y P_Z$
    $nP_{XYZ}$             | $(K \circ L \circ M)_{++}$  | $((K \circ L)M)_{++}$      | $((K \circ M)L)_{++}$      | $((M \circ L)K)_{++}$      | $\mathrm{tr}(K_+ \circ L_+ \circ M_+)$
    $n^2 P_{XY}P_Z$        |                             | $(K \circ L)_{++} M_{++}$  | $(MKL)_{++}$               | $(KLM)_{++}$               | $(KL)_{++} M_{++}$
    $n^2 P_{XZ}P_Y$        |                             |                            | $(K \circ M)_{++} L_{++}$  | $(KML)_{++}$               | $(KM)_{++} L_{++}$
    $n^2 P_{YZ}P_X$        |                             |                            |                            | $(L \circ M)_{++} K_{++}$  | $(LM)_{++} K_{++}$
    $n^3 P_X P_Y P_Z$      |                             |                            |                            |                            | $K_{++} L_{++} M_{++}$
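Before moving to three variables, the two-variable estimator above is easy to make concrete. The following minimal sketch (Python with NumPy; the language, the Gaussian kernel and its bandwidth are our illustrative choices, not prescribed by the paper) computes the V-statistic both from the three-term expansion of Eq. (4) and from the centered form; the two agree up to floating-point error:

    import numpy as np

    def gaussian_gram(X, sigma=1.0):
        # Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)); kernel choice is illustrative
        sq = (X ** 2).sum(axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def hsic_v_terms(K, L):
        # three-term expansion of ||P_XY - P_X P_Y||^2_{k (x) l} from Eq. (4)
        n = K.shape[0]
        return (K * L).sum() / n**2 + K.sum() * L.sum() / n**4 - 2.0 * (K @ L).sum() / n**3

    def hsic_v_centered(K, L):
        # equivalent one-line form (1/n^2) (K o HLH)_{++} with H = I - (1/n) 1 1^T
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return (K * (H @ L @ H)).sum() / n**2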
4.2 Three-Variable Tests
As in the two-variable case, it suffices to derive V-statistics for inner products $\langle\langle \nu, \nu' \rangle\rangle_{k \otimes l \otimes m}$, where
$\nu$ and $\nu'$ take values in all possible combinations of the joint and the products of the marginals, i.e.,
$P_{XYZ}$, $P_{XY} P_Z$, etc. Again, it is easy to see that these can be expressed as certain expectations of
kernel functions, and thereby can be calculated by an appropriate manipulation of the three Gram
matrices. We summarize the resulting expressions in Table 2; their derivation is a tedious but
straightforward linear algebra exercise. For compactness, the appropriate normalizing terms are
moved inside the measures considered.
Based on the individual RKHS inner product estimators, we can now easily derive estimators for
various signed measures arising as linear combinations of $P_{XYZ}$, $P_{XY} P_Z$, and so on. The first such
measure is an "incomplete" Lancaster interaction measure $\Delta_{(Z)} P = P_{XYZ} + P_X P_Y P_Z - P_{YZ} P_X - P_{XZ} P_Y$, which vanishes if $(Y,Z) \perp\!\!\!\perp X$ or $(X,Z) \perp\!\!\!\perp Y$, but not necessarily if $(X,Y) \perp\!\!\!\perp Z$. We
obtain the following result for the empirical measure $\hat{P}$.
Proposition 2 (Incomplete Lancaster interaction). $\left\| \Delta_{(Z)} \hat{P} \right\|^2_{k \otimes l \otimes m} = \frac{1}{n^2}\big(\tilde{K} \circ \tilde{L} \circ M\big)_{++}$.
Analogous expressions hold for $\Delta_{(X)} \hat{P}$ and $\Delta_{(Y)} \hat{P}$. Unlike in the two-variable case where either
matrix or both can be centered, centering of each matrix in the three-variable case has a different
meaning. In particular, one requires centering of all three kernel matrices to perform a "complete"
Lancaster interaction test, as given by the following Proposition.
Proposition 3 (Lancaster interaction). $\left\| \Delta_L \hat{P} \right\|^2_{k \otimes l \otimes m} = \frac{1}{n^2}\big(\tilde{K} \circ \tilde{L} \circ \tilde{M}\big)_{++}$.
The proofs of these Propositions are given in Appendix A. We summarize various hypotheses and
the associated V-statistics in Appendix B. As we will demonstrate in the experiments in Section
5, while particularly useful for testing the factorization hypothesis, i.e., for $(X,Y) \perp\!\!\!\perp Z \;\vee\; (X,Z) \perp\!\!\!\perp Y \;\vee\; (Y,Z) \perp\!\!\!\perp X$, the statistic $\left\| \Delta_L \hat{P} \right\|^2_{k \otimes l \otimes m}$ can also be used for powerful tests of either the
individual hypotheses $(Y,Z) \perp\!\!\!\perp X$, $(X,Z) \perp\!\!\!\perp Y$, or $(X,Y) \perp\!\!\!\perp Z$, or for total independence testing,
i.e., $P_{XYZ} = P_X P_Y P_Z$, as it vanishes in all of these cases. The null distribution under each of these
hypotheses can be estimated using a standard permutation-based approach described in Appendix
D.

³ In general, however, this approach would require some care since, e.g., $X$ and $Y$ could be measured on
very different scales, and the choice of kernels $k$ and $l$ needs to take this into account.
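As a concrete illustration of Proposition 3, the Lancaster V-statistic amounts to centering all three Gram matrices and summing an entrywise product. A minimal sketch (Python/NumPy, our illustrative choice):

    import numpy as np

    def center(K):
        # K~ = HKH with H = I - (1/n) 1 1^T
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def lancaster_v(K, L, M):
        # ||Delta_L P^||^2_{k (x) l (x) m} = (1/n^2) (K~ o L~ o M~)_{++}  (Proposition 3)
        n = K.shape[0]
        return (center(K) * center(L) * center(M)).sum() / n**2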
Another way to obtain the Lancaster interaction statistic is as the RKHS norm of the joint "central
moment" $E_{XYZ}[(k_X - \mu_X) \otimes (l_Y - \mu_Y) \otimes (m_Z - \mu_Z)]$ of RKHS-valued random
variables $k_X$, $l_Y$ and $m_Z$ (understood as an element of the tensor RKHS $\mathcal{H}_k \otimes \mathcal{H}_l \otimes \mathcal{H}_m$). This is
related to a classical characterization of the Lancaster interaction [21, Ch. XII]: there is no Lancaster
interaction between $X$, $Y$ and $Z$ if and only if $\mathrm{cov}[f(X), g(Y), h(Z)] = 0$ for all $L_2$ functions $f$, $g$
and $h$. There is an analogous result in our case (proof is given in Appendix A), which states
Proposition 4. $\|\Delta_L P\|_{k \otimes l \otimes m} = 0$ if and only if $\mathrm{cov}[f(X), g(Y), h(Z)] = 0$ for all $f \in \mathcal{H}_k$,
$g \in \mathcal{H}_l$, $h \in \mathcal{H}_m$.
And finally, we give an estimator of the RKHS norm of the total independence measure $\Delta_{tot} P$.
Proposition 5 (Total independence). Let $\Delta_{tot} \hat{P} = \hat{P}_{XYZ} - \hat{P}_X \hat{P}_Y \hat{P}_Z$. Then:
$$\left\| \Delta_{tot} \hat{P} \right\|^2_{k \otimes l \otimes m} = \frac{1}{n^2}\,(K \circ L \circ M)_{++} - \frac{2}{n^4}\,\mathrm{tr}(K_+ \circ L_+ \circ M_+) + \frac{1}{n^6}\, K_{++} L_{++} M_{++}.$$
The proof follows simply from reading off the corresponding inner-product V-statistics from
Table 2. While the test statistic for total independence has a somewhat more complicated form than
that of Lancaster interaction, it can also be computed in quadratic time.
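Proposition 5 likewise translates directly into code by reading off its three terms; a sketch under the same illustrative conventions as above:

    import numpy as np

    def total_independence_v(K, L, M):
        # ||Delta_tot P^||^2 = (1/n^2)(K o L o M)_{++} - (2/n^4) tr(K_+ o L_+ o M_+)
        #                      + (1/n^6) K_{++} L_{++} M_{++}   (Proposition 5)
        n = K.shape[0]
        kp, lp, mp = K.sum(axis=0), L.sum(axis=0), M.sum(axis=0)  # column sums K_{+i}
        return ((K * L * M).sum() / n**2
                - 2.0 * (kp * lp * mp).sum() / n**4
                + K.sum() * L.sum() * M.sum() / n**6)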
4.3 Interaction for D > 3
Streitberg's correction of the interaction measure for $D > 3$ has the form
$$\Delta_S P = \sum_{\pi} (-1)^{|\pi|-1}\,(|\pi|-1)!\; J_\pi P, \qquad (5)$$
where the sum is taken over all partitions $\pi$ of the set $\{1, 2, \ldots, D\}$, $|\pi|$ denotes the size of the partition
(number of blocks), and $J_\pi : P \mapsto P_\pi$ is the partition operator on probability measures, which for
a fixed partition $\pi = \pi_1 | \pi_2 | \ldots | \pi_r$ maps the probability measure $P$ to the product measure $P_\pi = \prod_{j=1}^{r} P_{\pi_j}$, where $P_{\pi_j}$ is the marginal distribution of the subvector $(X_i : i \in \pi_j)$. The coefficients
correspond to the Möbius inversion on the partition lattice [34]. While the Lancaster interaction
has an interpretation in terms of joint central moments, Streitberg's correction corresponds to joint
cumulants [22, Section 4]. Therefore, a central moment expression like $E_{X_1 \ldots X_D}\big[ \big(k^{(1)}_{X_1} - \mu_{X_1}\big) \otimes \cdots \otimes \big(k^{(D)}_{X_D} - \mu_{X_D}\big) \big]$ does not capture the correct notion of the interaction measure. Thus, while
one can in principle construct RKHS embeddings of higher-order interaction measures, and compute
RKHS norms using a calculus of V-statistics and Gram matrices analogous to that of Table 2, it does
not seem possible to avoid summing over all partitions when computing the corresponding statistics,
yielding a computationally prohibitive approach in general. This can be viewed by analogy with the
scalar case, where it is well known that the second and third cumulants coincide with the second
and third central moments, whereas the higher order cumulants are neither moments nor central
moments, but some other polynomials of the moments.
4.4 Total independence for D > 3
In general, the test statistic for total independence in the $D$-variable case is
$$\Bigg\| \hat{P}_{X_{1:D}} - \prod_{i=1}^{D} \hat{P}_{X_i} \Bigg\|^2_{\bigotimes_{i=1}^{D} k^{(i)}} = \frac{1}{n^2} \sum_{a=1}^{n} \sum_{b=1}^{n} \prod_{i=1}^{D} K^{(i)}_{ab} \;-\; \frac{2}{n^{D+1}} \sum_{a=1}^{n} \prod_{i=1}^{D} \sum_{b=1}^{n} K^{(i)}_{ab} \;+\; \frac{1}{n^{2D}} \prod_{i=1}^{D} \sum_{a=1}^{n} \sum_{b=1}^{n} K^{(i)}_{ab}.$$
A similar statistic for total independence is discussed by [24], where testing of total independence
based on empirical characteristic functions is considered. Our test has a direct interpretation in terms
of characteristic functions as well, which is straightforward to see in the case of translation invariant
kernels on Euclidean spaces, using their Bochner representation, similarly as in [27, Corollary 4].
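The statistic above needs only D entrywise products and row sums, so it remains quadratic in n for fixed D; a sketch (Python/NumPy, illustrative):

    import numpy as np

    def total_independence_v_general(Ks):
        # Ks: list of D Gram matrices K^(1), ..., K^(D) over a common sample of size n
        n = Ks[0].shape[0]
        D = len(Ks)
        had = np.ones((n, n))
        row = np.ones(n)
        for K in Ks:
            had *= K              # entrywise product, prod_i K^(i)_{ab}
            row *= K.sum(axis=1)  # per-a product of row sums, prod_i sum_b K^(i)_{ab}
        tot = np.prod([K.sum() for K in Ks])
        return had.sum() / n**2 - 2.0 * row.sum() / n**(D + 1) + tot / n**(2 * D)

For D = 3 this agrees with the three-variable total_independence_v function sketched above.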
[Figure 1 appears here: null acceptance rate (Type II error) versus dimension (1 through 19) for marginal independence tests on Datasets A and B; curves: 2var: $X \perp\!\!\!\perp Y$; 2var: $X \perp\!\!\!\perp Z$; 2var: $(X,Y) \perp\!\!\!\perp Z$; $\Delta_L$: $(X,Y) \perp\!\!\!\perp Z$.]
Figure 1: Two-variable kernel independence tests and the test for $(X,Y) \perp\!\!\!\perp Z$ using the Lancaster statistic

[Figure 2 appears here: null acceptance rate (Type II error) versus dimension for total independence tests on Datasets A and B; curves: $\Delta_L$: total indep.; $\Delta_{tot}$: total indep.]
Figure 2: Total independence: $\Delta_{tot} \hat{P}$ vs. $\Delta_L \hat{P}$.
5 Experiments
We investigate the performance of various permutation-based tests that use the Lancaster statistic
$\| \Delta_L \hat{P} \|^2_{k \otimes l \otimes m}$ and the total independence statistic $\| \Delta_{tot} \hat{P} \|^2_{k \otimes l \otimes m}$ on two synthetic datasets where
$X$, $Y$ and $Z$ are random vectors of increasing dimensionality:
Dataset A: Pairwise independent, mutually dependent data. Our first dataset is a triplet of
random vectors $(X, Y, Z)$ on $\mathbb{R}^p \times \mathbb{R}^p \times \mathbb{R}^p$, with $X, Y \overset{i.i.d.}{\sim} \mathcal{N}(0, I_p)$, $W \sim \mathrm{Exp}(\tfrac{1}{\sqrt{2}})$,
$Z_1 = \mathrm{sign}(X_1 Y_1)\, W$, and $Z_{2:p} \sim \mathcal{N}(0, I_{p-1})$, i.e., the product of $X_1 Y_1$ determines the sign of
$Z_1$, while the remaining $p-1$ dimensions are independent (and serve as noise in this example).⁴
In this case, $(X, Y, Z)$ is clearly a pairwise independent but mutually dependent triplet. The mutual
dependence becomes increasingly difficult to detect as the dimensionality $p$ increases.
Dataset B: Joint dependence can be easier to detect. In this example, we consider a triplet of
random vectors $(X, Y, Z)$ on $\mathbb{R}^p \times \mathbb{R}^p \times \mathbb{R}^p$, with $X, Y \overset{i.i.d.}{\sim} \mathcal{N}(0, I_p)$, $Z_{2:p} \sim \mathcal{N}(0, I_{p-1})$, and
$$Z_1 = \begin{cases} X_1^2 + \epsilon, & \text{w.p. } 1/3, \\ Y_1^2 + \epsilon, & \text{w.p. } 1/3, \\ X_1 Y_1 + \epsilon, & \text{w.p. } 1/3, \end{cases}$$
where $\epsilon \sim \mathcal{N}(0, 0.1^2)$. Thus, dependence of $Z$ on the pair $(X, Y)$ is stronger than on $X$ and $Y$ individually.

⁴ Note that there is no reason for $X$, $Y$ and $Z$ to have the same dimensionality $p$; this is done for simplicity
of exposition.
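For concreteness, a sketch of both generators follows (Python/NumPy, illustrative; we read Exp(1/sqrt(2)) as an exponential distribution with parameter 1/sqrt(2), and the scale-versus-rate convention is our assumption):

    import numpy as np
    rng = np.random.default_rng(0)

    def dataset_A(n, p):
        # pairwise independent but mutually dependent triplet (X, Y, Z)
        X = rng.standard_normal((n, p))
        Y = rng.standard_normal((n, p))
        Z = rng.standard_normal((n, p))            # dimensions 2..p act as noise
        W = rng.exponential(scale=1.0 / np.sqrt(2.0), size=n)  # parameterization assumed
        Z[:, 0] = np.sign(X[:, 0] * Y[:, 0]) * W
        return X, Y, Z

    def dataset_B(n, p):
        # dependence of Z on the pair (X, Y) is stronger than on X or Y individually
        X = rng.standard_normal((n, p))
        Y = rng.standard_normal((n, p))
        Z = rng.standard_normal((n, p))
        choice = rng.integers(0, 3, size=n)
        eps = rng.normal(0.0, 0.1, size=n)
        Z[:, 0] = np.select([choice == 0, choice == 1, choice == 2],
                            [X[:, 0] ** 2, Y[:, 0] ** 2, X[:, 0] * Y[:, 0]]) + eps
        return X, Y, Z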
[Figure 3 appears here: null acceptance rate (Type II error) versus dimension for V-structure discovery on Datasets A and B; curves: 2var: Factor; $\Delta_L$: Factor; CI: $X \perp\!\!\!\perp Y \mid Z$.]
Figure 3: Factorization hypothesis: Lancaster statistic vs. a two-variable based test; test for $X \perp\!\!\!\perp Y \mid Z$ from [18]
In all cases, we use permutation tests as described in Appendix D. The test level is set to $\alpha = 0.05$,
sample size to $n = 500$, and we use Gaussian kernels with bandwidth set to the interpoint median
distance. In Figure 1, we plot the null hypothesis acceptance rates of the standard kernel two-variable tests for $X \perp\!\!\!\perp Y$ (which is true for both datasets A and B, and accepted at the correct
rate across all dimensions) and for $X \perp\!\!\!\perp Z$ (which is true only for dataset A), as well as of the
standard kernel two-variable test for $(X,Y) \perp\!\!\!\perp Z$, and the test for $(X,Y) \perp\!\!\!\perp Z$ using the Lancaster
statistic. As expected, in dataset B, we see that dependence of $Z$ on the pair $(X,Y)$ is somewhat easier
to detect than on $X$ individually with two-variable tests. In both datasets, however, the Lancaster
interaction appears significantly more sensitive in detecting this dependence as dimensionality $p$
increases. Figure 2 plots the Type II error of total independence tests with statistics $\| \Delta_L \hat{P} \|^2_{k \otimes l \otimes m}$
and $\| \Delta_{tot} \hat{P} \|^2_{k \otimes l \otimes m}$. The Lancaster statistic outperforms the total independence statistic everywhere
apart from Dataset B when the number of dimensions is small (between 1 and 5). Figure 3 plots
the Type II error of the factorization test, i.e., the test for $(X,Y) \perp\!\!\!\perp Z \;\vee\; (X,Z) \perp\!\!\!\perp Y \;\vee\; (Y,Z) \perp\!\!\!\perp X$
with the Lancaster statistic with Holm-Bonferroni correction as described in Appendix D, as well as
the two-variable based test (which performs three standard two-variable tests and applies the Holm-Bonferroni correction). We also plot the Type II error for the conditional independence test for
$X \perp\!\!\!\perp Y \mid Z$ from [18]. Under the assumption that $X \perp\!\!\!\perp Y$ (correct on both datasets), negation of each
of these three hypotheses is equivalent to the presence of a V-structure $X \to Z \leftarrow Y$, so the rejection
of the null can be viewed as a V-structure detection procedure. As dimensionality increases, the
Lancaster statistic appears significantly more sensitive to the interactions present than the competing
approaches, which is particularly pronounced in Dataset A.
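A minimal sketch of the permutation approach for a single hypothesis, here $(X,Y) \perp\!\!\!\perp Z$ simulated by jointly permuting the rows and columns of $M$ (the full scheme, including which variables are permuted under each hypothesis and the Holm-Bonferroni step, is in Appendix D; the statistic argument can be, e.g., the lancaster_v function sketched earlier):

    import numpy as np
    rng = np.random.default_rng(0)

    def permutation_test(statistic, K, L, M, n_perm=200, alpha=0.05):
        # null: (X, Y) independent of Z, simulated by permuting Z's sample indices
        n = K.shape[0]
        observed = statistic(K, L, M)
        null = np.empty(n_perm)
        for b in range(n_perm):
            idx = rng.permutation(n)
            null[b] = statistic(K, L, M[np.ix_(idx, idx)])
        p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
        return p_value < alpha, p_value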
6 Conclusions
We have constructed permutation-based nonparametric tests for three-variable interactions, including the Lancaster interaction and total independence. The tests can be used in datasets where only
higher-order interactions persist, i.e., variables are pairwise independent, as well as in cases where
joint dependence may be easier to detect than pairwise dependence, for instance when the effect of
two variables on a third is not additive. The flexibility of the framework of RKHS embeddings of
signed measures allows us to consider variables that are themselves multidimensional. While the total independence case readily generalizes to more than three dimensions, the combinatorial nature of
joint cumulants implies that detecting interactions of higher order requires significantly more costly
computation.
References
[1] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In ALT, pages 63-78, 2005.
[2] G. Székely, M. Rizzo, and N.K. Bakirov. Measuring and testing dependence by correlation of distances. Ann. Stat., 35(6):2769-2794, 2007.
[3] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Ann. Stat., 41(5):2263-2291, 2013.
[4] F. R. Bach and M. I. Jordan. Kernel independent component analysis. J. Mach. Learn. Res., 3:1-48, 2002.
[5] K. Fukumizu, F. Bach, and A. Gretton. Statistical consistency of kernel canonical correlation analysis. J. Mach. Learn. Res., 8:361-383, 2007.
[6] J. Dauxois and G. M. Nkiet. Nonlinear canonical analysis and independence tests. Ann. Stat., 26(4):1254-1278, 1998.
[7] D. Pál, B. Póczos, and Cs. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In NIPS 23, 2010.
[8] A. Kankainen. Consistent Testing of Total Independence Based on the Empirical Characteristic Function. PhD thesis, University of Jyväskylä, 1995.
[9] S. Bernstein. The Theory of Probabilities. Gastehizdat Publishing House, Moscow, 1946.
[10] M. Kayano, I. Takigawa, M. Shiga, K. Tsuda, and H. Mamitsuka. Efficiently finding genome-wide three-way gene interactions from transcript- and genotype-data. Bioinformatics, 25(21):2735-2743, 2009.
[11] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Ann. Stat., 34(3):1436-1462, 2006.
[12] P. Ravikumar, M.J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electron. J. Stat., 4:935-980, 2011.
[13] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2001.
[14] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. 2nd edition, 2000.
[15] M. Kalisch and P. Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC algorithm. J. Mach. Learn. Res., 8:613-636, 2007.
[16] X. Sun, D. Janzing, B. Schölkopf, and K. Fukumizu. A kernel-based causal learning algorithm. In ICML, pages 855-862, 2007.
[17] R. Tillman, A. Gretton, and P. Spirtes. Nonlinear directed acyclic structure learning with weakly additive noise models. In NIPS 22, 2009.
[18] K. Zhang, J. Peters, D. Janzing, and B. Schölkopf. Kernel-based conditional independence test and application in causal discovery. In UAI, pages 804-813, 2011.
[19] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In NIPS 20, pages 585-592, Cambridge, MA, 2008. MIT Press.
[20] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In NIPS 20, pages 489-496, 2008.
[21] H.O. Lancaster. The Chi-Squared Distribution. Wiley, London, 1969.
[22] B. Streitberg. Lancaster interactions revisited. Ann. Stat., 18(4):1878-1885, 1990.
[23] K. Fukumizu, B. Sriperumbudur, A. Gretton, and B. Schölkopf. Characteristic kernels on groups and semigroups. In NIPS 21, pages 473-480, 2009.
[24] A. Kankainen. Consistent Testing of Total Independence Based on the Empirical Characteristic Function. PhD thesis, University of Jyväskylä, 1995.
[25] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[26] B. Sriperumbudur, K. Fukumizu, and G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12:2389-2410, 2011.
[27] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res., 11:1517-1561, 2010.
[28] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. J. Mach. Learn. Res., 13:723-773, 2012.
[29] D. Sejdinovic, A. Gretton, B. Sriperumbudur, and K. Fukumizu. Hypothesis testing using pairwise distances and associated kernels. In ICML, 2012.
[30] G. Székely and M. Rizzo. Testing for equal distributions in high dimension. InterStat, (5), November 2004.
[31] L. Baringhaus and C. Franz. On a new multivariate two-sample test. J. Multivariate Anal., 88(1):190-206, 2004.
[32] G. Székely and M. Rizzo. Brownian distance covariance. Ann. Appl. Stat., 4(3):1233-1303, 2009.
[33] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[34] T.P. Speed. Cumulants and partition lattices. Austral. J. Statist., 25:378-388, 1983.
[35] S. Holm. A simple sequentially rejective multiple test procedure. Scand. J. Statist., 6(2):65-70, 1979.
[36] A. Gretton, K. Fukumizu, Z. Harchaoui, and B. Sriperumbudur. A fast, consistent kernel two-sample test. In NIPS 22, Red Hook, NY, 2009. Curran Associates Inc.
4,302 | 4,894 | More Effective Distributed ML via a Stale
Synchronous Parallel Parameter Server
?Qirong Ho, ?James Cipar, ?Henggang Cui, ?Jin Kyu Kim, ?Seunghak Lee,
?Phillip B. Gibbons, ?Garth A. Gibson, ?Gregory R. Ganger, ?Eric P. Xing
?School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
?Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
qho@, jcipar@, jinkyuk@,
hengganc@, [email protected]
?Intel Labs
Pittsburgh, PA 15213
[email protected]
seunghak@, garth@,
[email protected]
Abstract
We propose a parameter server system for distributed ML, which follows a Stale
Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared
interface for read/write access to an ML model?s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions
of these values from a local cache, instead of waiting to get them from a central
storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm
correctness by limiting the maximum age of the stale values. We provide a proof
of correctness under SSP, as well as empirical results demonstrating that the SSP
model achieves faster algorithm convergence on several different ML problems,
compared to fully-synchronous and asynchronous schemes.
1 Introduction
Modern applications awaiting next generation machine intelligence systems have posed unprecedented scalability challenges. These scalability needs arise from at least two aspects: 1) massive
data volume, such as societal-scale social graphs [10, 25] with up to hundreds of millions of nodes;
and 2) massive model size, such as the Google Brain deep neural network [9] containing billions of
parameters. Although there exist means and theories to support reductionist approaches like subsampling data or using small models, there is an imperative need for sound and effective distributed ML
methodologies for users who cannot be well-served by such shortcuts. Recent efforts towards distributed ML have made significant advancements in two directions: (1) Leveraging existing common
but simple distributed systems to implement parallel versions of a limited selection of ML models,
that can be shown to have strong theoretical guarantees under parallelization schemes such as cyclic
delay [17, 1], model pre-partitioning [12], lock-free updates [21], bulk synchronous parallel [5], or
even no synchronization [28]; these schemes are simple to implement but may under-exploit the
full computing power of a distributed cluster. (2) Building high-throughput distributed ML architectures or algorithm implementations that feature significant systems contributions but relatively less
theoretical analysis, such as GraphLab [18], Spark [27], Pregel [19], and YahooLDA [2].
While the aforementioned works are significant contributions in their own right, a naturally desirable
goal for distributed ML is to pursue a system that (1) can maximally unleash the combined computational power in a cluster of any given size (by spending more time doing useful computation and
less time waiting for communication), (2) supports inference for a broad collection of ML methods,
and (3) enjoys correctness guarantees. In this paper, we explore a path to such a system using the
idea of a parameter server [22, 2], which we define as the combination of a shared key-value store
that provides a centralized storage model (which may be implemented in a distributed fashion) with
a synchronization model for reading/updating model values. The key-value store provides easyto-program read/write access to shared parameters needed by all workers, and the synchronization
model maximizes the time each worker spends on useful computation (versus communication with
the server) while still providing algorithm correctness guarantees.
Towards this end, we propose a parameter server using a Stale Synchronous Parallel (SSP) model of
computation, for distributed ML algorithms that are parallelized into many computational workers
(technically, threads) spread over many machines. In SSP, workers can make updates $\delta$ to a parameter¹ $\theta$, where the updates follow an associative, commutative form $\theta \leftarrow \theta + \delta$. Hence, the current
true value of $\theta$ is just the sum over updates $\delta$ from all workers. When a worker asks for $\theta$, the SSP
model will give it a stale (i.e. delayed) version of $\theta$ that excludes recent updates $\delta$. More formally,
a worker reading $\theta$ at iteration $c$ will see the effects of all $\delta$ from iteration $0$ to $c - s - 1$, where
$s \ge 0$ is a user-controlled staleness threshold. In addition, the worker may get to see some recent
updates beyond iteration $c - s - 1$. The idea is that SSP systems should deliver as many updates
as possible, without missing any updates older than a given age, a concept referred to as bounded
staleness [24]. The practical effect of this is twofold: (1) workers can perform more computation
instead of waiting for other workers to finish, and (2) workers spend less time communicating with
the parameter server, and more time doing useful computation. Bounded staleness distinguishes
SSP from cyclic-delay systems [17, 1] (where $\theta$ is read with inflexible staleness), Bulk Synchronous
Parallel (BSP) systems like Hadoop (workers must wait for each other at the end of every iteration),
or completely asynchronous systems [2] (workers never wait, but $\theta$ has no staleness guarantees).
We implement an SSP parameter server with a table-based interface, called SSPtable, that supports
a wide range of distributed ML algorithms for many models and applications. SSPtable itself can
also be run in a distributed fashion, in order to (a) increase performance, or (b) support applications
where the parameters $\theta$ are too large to fit on one machine. Moreover, SSPtable takes advantage of
bounded staleness to maximize ML algorithm performance, by reading the parameters $\theta$ from caches
on the worker machines whenever possible, and only reading $\theta$ from the parameter server when the
SSP model requires it. Thus, workers (1) spend less time waiting for each other, and (2) spend less
time communicating with the parameter server. Furthermore, we show that SSPtable (3) helps slow,
straggling workers to catch up, providing a systems-based solution to the "last reducer" problem on
systems like Hadoop (while we note that theory-based solutions are also possible). SSPtable can
be run on multiple server machines (called "shards"), thus dividing its workload over the cluster;
in this manner, SSPtable can (4) service more workers simultaneously, and (5) support very large
models that cannot fit on a single machine. Finally, the SSPtable server program can also be run on
worker machines, which (6) provides a simple but effective strategy for allocating machines between
workers and the parameter server.
Our theoretical analysis shows that (1) SSP generalizes the bulk synchronous parallel (BSP) model,
and that (2) stochastic gradient algorithms (e.g. for matrix factorization or topic models) under SSP
not only converge, but do so at least as fast as cyclic-delay systems [17, 1] (and potentially even
faster depending on implementation). Furthermore, our implementation of SSP, SSPtable, supports
a wide variety of algorithms and models, and we demonstrate it on several popular ones: (a) Matrix Factorization with stochastic gradient descent [12], (b) Topic Modeling with collapsed Gibbs
sampling [2], and (c) Lasso regression with parallelized coordinate descent [5]. Our experimental
results show that, for these 3 models and algorithms, (i) SSP yields faster convergence than BSP (up
to several times faster), and (ii) SSP yields faster convergence than a fully asynchronous (i.e. no staleness guarantee) system. We explain SSPtable's better performance in terms of algorithm progress
per iteration (quality) and iterations executed per unit time (quantity), and show that SSPtable hits a
"sweet spot" between quality and quantity that is missed by BSP and fully asynchronous systems.

2 Stale Synchronous Parallel Model of Computation

We begin with an informal explanation of SSP: assume a collection of $P$ workers, each of which
makes additive updates $x \leftarrow x + u$ to a shared parameter $x$ at regular intervals called clocks. Clocks
are similar to iterations, and represent some unit of progress by an ML algorithm. Every worker
has its own integer-valued clock $c$, and workers only commit their updates at the end of each clock.

¹ For example, the parameter $\theta$ might be the topic-word distributions in LDA, or the factor matrices in a
matrix decomposition, while the updates $u$ could be adding or removing counts to topic-word or document-word
tables in LDA, or stochastic gradient steps in a matrix decomposition.
Updates $u$ may not be immediately visible to other workers trying to read $x$; in other words, workers
only see effects from a "stale" subset of updates. The idea is that, with staleness, workers can retrieve
updates from caches on the same machine (fast) instead of querying the parameter server over the
network (slow). Given a user-chosen staleness threshold $s \ge 0$, SSP enforces the following bounded
staleness conditions (see Figure 1 for a graphical illustration; a small sketch of the resulting visibility rules follows this list):
• The slowest and fastest workers must be $\le s$ clocks apart; otherwise, the fastest worker is
forced to wait for the slowest worker to catch up.
• When a worker with clock $c$ commits an update $u$, that $u$ is timestamped with time $c$.
• When a worker with clock $c$ reads $x$, it will always see effects from all $u$ with timestamp $\le c - s - 1$. It may also see some $u$ with timestamp $> c - s - 1$ from other workers.
• Read-my-writes: A worker $p$ will always see the effects of its own updates $u_p$.
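As a concrete reading of these rules, the sketch below (Python, our illustrative choice; it is not part of any SSP implementation) enumerates the update timestamps that are guaranteed visible to worker p reading at clock c; updates from the width-2s window beyond these may or may not be delivered:

    def guaranteed_visible(p, c, s, P):
        # updates (worker, clock) that worker p at clock c is guaranteed to see under SSP:
        # every worker's updates with clock <= c - s - 1, plus p's own (read-my-writes)
        pre_window = [(q, cc) for q in range(P) for cc in range(c - s)]
        read_my_writes = [(p, cc) for cc in range(max(0, c - s), c)]
        return pre_window + read_my_writes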
Since the fastest and slowest workers are $\le s$ clocks apart, a worker reading $x$ at clock $c$ will see all
updates with timestamps in $[0, c - s - 1]$, plus a (possibly empty) "adaptive" subset of updates in
the range $[c - s, c + s - 1]$. Note that when $s = 0$, the "guaranteed" range becomes $[0, c - 1]$ while
the adaptive range becomes empty, which is exactly the Bulk Synchronous Parallel model of computation. Let us look at how SSP applies to an example ML algorithm.

[Figure 1 appears here: worker progress versus clock (0 through 9) for four workers under staleness threshold 3, marking updates visible to all workers, updates visible to Worker 1 due to read-my-writes, and updates not necessarily visible to Worker 1; Worker 1 must wait on further reads until Worker 2 has reached clock 4.]
Figure 1: Bounded Staleness under the SSP Model
2.1 An example: Stochastic Gradient Descent for Matrix Problems
The Stochastic Gradient Descent (SGD) [17, 12] algorithm optimizes an objective function by applying gradient descent to random subsets of the data. Consider a matrix completion task, which
involves decomposing an $N \times M$ matrix $D$ into two low-rank matrices $L, R$ such that $LR \approx D$, where $L, R$ have
sizes $N \times K$ and $K \times M$ (for a user-specified $K$). The data matrix $D$ may have missing entries,
corresponding to missing data. Concretely, $D$ could be a matrix of users against products, with $D_{ij}$
representing user $i$'s rating of product $j$. Because users do not rate all possible products, the goal is
to predict ratings for missing entries $D_{ab}$ given known entries $D_{ij}$. If we found low-rank matrices
$L, R$ such that $L_{i\cdot} \cdot R_{\cdot j} \approx D_{ij}$ for all known entries $D_{ij}$, we could then predict $D_{ab} = L_{a\cdot} \cdot R_{\cdot b}$ for
unknown entries $D_{ab}$.
To perform the decomposition, let us minimize the squared difference between each known entry
$D_{ij}$ and its prediction $L_{i\cdot} \cdot R_{\cdot j}$ (note that other loss functions and regularizers are also possible):
$$\min_{L,R} \sum_{(i,j) \in \mathrm{Data}} \Big( D_{ij} - \sum_{k=1}^{K} L_{ik} R_{kj} \Big)^2. \qquad (1)$$
As a first step towards SGD, consider solving Eq. (1) using coordinate gradient descent on $L, R$:
$$\frac{\partial \mathcal{O}_{MF}}{\partial L_{ik}} = \sum_{(a,b) \in \mathrm{Data}} \delta(a = i)\,\big[-2 D_{ab} R_{kb} + 2 L_{a\cdot} R_{\cdot b} R_{kb}\big], \qquad \frac{\partial \mathcal{O}_{MF}}{\partial R_{kj}} = \sum_{(a,b) \in \mathrm{Data}} \delta(b = j)\,\big[-2 D_{ab} L_{ak} + 2 L_{a\cdot} R_{\cdot b} L_{ak}\big],$$
where $\mathcal{O}_{MF}$ is the objective in Eq. (1), and $\delta(a = i)$ equals 1 if $a = i$, and 0 otherwise. This can be
transformed into an SGD algorithm by replacing the full sum over entries $(a,b)$ with a subsample
(with appropriate reweighting). The entries $D_{ab}$ can then be distributed over multiple workers, and
their gradients computed in parallel [12].
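For concreteness, a minimal serial sketch of the resulting SGD update is given below (Python/NumPy, our illustrative choice; the step size eta and the minibatch format are assumptions). In the distributed setting described next, R would be read through the parameter server rather than held locally:

    import numpy as np

    def sgd_minibatch_step(L, R, batch, eta):
        # one SGD pass over a minibatch of known entries (i, j, D_ij) of the data matrix
        for i, j, d in batch:
            err = L[i, :] @ R[:, j] - d        # L_{i.} R_{.j} - D_ij
            grad_L = 2.0 * err * R[:, j]       # gradient w.r.t. row L_{i.}
            grad_R = 2.0 * err * L[i, :]       # gradient w.r.t. column R_{.j}
            L[i, :] -= eta * grad_L
            R[:, j] -= eta * grad_R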
We assume that D is "tall", i.e. N > M (or transpose D so this is true), and partition the rows of
D and L over the processors. Only R needs to be shared among all processors, so we let it be the
SSP shared parameter x := R. SSP allows many workers to read/write to R with minimal waiting,
though the workers will only see stale values of R. This tradeoff is beneficial because without
staleness, the workers must wait for a long time when reading R from the server (as our experiments
will show). While having stale values of R decreases convergence progress per iteration, SSP more
than makes up by enabling significantly more iterations per minute, compared to fully synchronous
systems. Thus, SSP yields more convergence progress per minute, i.e. faster convergence.
Note that SSP is not limited to stochastic gradient matrix algorithms: it can also be applied to parallel
collapsed sampling on topic models [2] (by storing the word-topic and document-topic tables in x),
parallel coordinate descent on Lasso regression [5] (by storing the regression coefficients in x), as
well as any other parallel algorithm or model with shared parameters that all workers need read/write
access to. Our experiments will show that SSP performs better than bulk synchronous parallel and
asynchronous systems for matrix completion, topic modeling and Lasso regression.
3 SSPtable: an Efficient SSP System
An ideal SSP implementation would fully exploit the leeway granted by the SSP's bounded staleness property,
in order to balance the time workers spend waiting on
reads with the need for freshness in the shared data. This
section describes our initial implementation of SSPtable,
which is a parameter server conforming to the SSP model,
and that can be run on many server machines at once (distributed). Our experiments with this SSPtable implementation show that SSP can indeed improve convergence
rates for several ML models and algorithms, while further tuning of cache management policies could improve
the performance of SSPtable further.

[Figure 2 appears here: a client process hosts application threads, each with a thread cache, on top of a shared process cache; clients communicate with table server processes holding table data and pending requests.]
Figure 2: Cache structure of SSPtable, with multiple server shards
SSPtable follows a distributed client-server architecture. Clients access shared parameters using a
client library, which maintains a machine-wide process cache and optional per-thread² thread caches
(Figure 2); the latter are useful for improving performance, by reducing inter-thread synchronization
(which forces workers to wait) when a client ML program executes multiple worker threads on each
of multiple cores of a client machine. The server parameter state is divided (sharded) over multiple
server machines, and a normal configuration would include a server process on each of the client
machines. Programming with SSPtable follows a simple table-based API for reading/writing to
shared parameters x (for example, the matrix R in the SGD example of Section 2.1):
• Table Organization: SSPtable supports an unlimited number of tables, which are divided into
rows, which are further subdivided into elements. These tables are used to store x.
• read_row(table,row,s): Retrieve a table-row with staleness threshold s. The user can
then query individual row elements.
• inc(table,row,el,val): Increase a table-row-element by val, which can be negative.
These changes are not propagated to the servers until the next call to clock().
• clock(): Inform all servers that the current thread/processor has completed one clock, and
commit all outstanding inc()s to the servers.
Any number of read_row() and inc() calls can be made in-between calls to clock(). Different thread workers are permitted to be at different clocks; however, bounded staleness requires that
the fastest and slowest threads be no more than s clocks apart. In this situation, SSPtable forces the
fastest thread to block (i.e. wait) on calls to read_row(), until the slowest thread has caught up.
To maintain the "read-my-writes" property, we use a write-back policy: all writes are immediately
committed to the thread caches, and are flushed to the process cache and servers upon clock().
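To illustrate the programming model, here is a hypothetical worker loop written against a Python-flavored binding of this API (the object name table and the helpers my_rows, data, compute_update, eta and num_clocks are our own illustrative stand-ins, not part of SSPtable):

    s = 3                                        # user-chosen staleness threshold
    for c in range(num_clocks):                  # one algorithm iteration per clock
        for j in my_rows:                        # rows of R assigned to this worker
            row = table.read_row("R", j, s)      # may be served from a stale local cache
            update = compute_update(row, data[j])
            for el, val in enumerate(update):
                table.inc("R", j, el, -eta * val)  # buffered until clock()
        table.clock()                            # commit all inc()s, advance this clock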
To maintain bounded staleness while minimizing wait times on read_row() operations, SSPtable
uses the following cache protocol: Let every table-row in a thread or process cache be endowed
with a clock r_thread or r_proc respectively. Let every thread worker be endowed with a clock c, equal
to the number of times it has called clock(). Finally, define the server clock c_server to be the
minimum over all thread clocks c. When a thread with clock c requests a table-row, it first checks
its thread cache. If the row is cached with clock r_thread >= c - s, then it reads the row. Otherwise,
it checks the process cache next; if the row is cached with clock r_proc >= c - s, then it reads the
row. At this point, no network traffic has been incurred yet. However, if both caches miss, then a
network request is sent to the server (which forces the thread to wait for a reply). The server returns
its view of the table-row as well as the clock c_server. Because the fastest and slowest threads can
be no more than s clocks apart, and because a thread's updates are sent to the server whenever it
calls clock(), the returned server view always satisfies the bounded staleness requirements for the
asking thread. After fetching a row from the server, the corresponding entry in the thread/process
caches and the clocks r_thread, r_proc are then overwritten with the server view and clock c_server.

² We assume that every computation thread corresponds to one ML algorithm worker.
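The protocol can be summarized in pseudocode as follows (a sketch, not SSPtable's actual source; CacheEntry, server.request and the cache objects are illustrative stand-ins):

    def read_row(thread, key, s):
        c = thread.clock                          # times this thread has called clock()
        for cache in (thread.cache, process_cache):
            entry = cache.get(key)
            if entry is not None and entry.clock >= c - s:
                return entry.value                # fresh enough: no network traffic
        value, c_server = server.request(key)     # blocking read; server returns its clock
        entry = CacheEntry(value, c_server)       # c_server = min over all thread clocks
        thread.cache[key] = entry
        process_cache[key] = entry
        return value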
A beneficial consequence of this cache protocol is that the slowest thread only performs costly server
reads every s clocks. Faster threads may perform server reads more frequently, and as frequently as
every clock if they are consistently waiting for the slowest thread?s updates. This distinction in work
per thread does not occur in BSP, wherein every thread must read from the server on every clock.
Thus, SSP not only reduces overall network traffic (thus reducing wait times for all server reads), but
also allows slow, straggler threads to avoid server reads in some iterations. Hence, the slow threads
naturally catch up, in turn allowing fast threads to proceed instead of waiting for them. In this
manner, SSP maximizes the time each machine spends on useful computation, rather than waiting.
4 Theoretical Analysis of SSP

Formally, the SSP model supports operations $x \leftarrow x \oplus (z \otimes y)$, where $x, y$ are members of a ring
with an abelian operator $\oplus$ (such as addition), and a multiplication operator $\otimes$ such that $z \otimes y = y'$
where $y'$ is also in the ring. In the context of ML, we shall focus on addition and multiplication
over real vectors $x, y$ and scalar coefficients $z$, i.e. $x \leftarrow x + (zy)$; such operations can be found
in the update equations of many ML inference algorithms, such as gradient descent [12], coordinate
descent [5] and collapsed Gibbs sampling [2]. In what follows, we shall informally refer to $x$ as the
"system state", $u = zy$ as an "update", and to the operation $x \leftarrow x + u$ as "writing an update".

We assume that $P$ workers write updates at regular time intervals (referred to as "clocks"). Let $u_{p,c}$
be the update written by worker $p$ at clock $c$ through the write operation $x \leftarrow x + u_{p,c}$. The updates
$u_{p,c}$ are a function of the system state $x$, and under the SSP model, different workers will "see"
different, noisy versions of the true state $x$. Let $\tilde{x}_{p,c}$ be the noisy state read by worker $p$ at clock $c$,
implying that $u_{p,c} = G(\tilde{x}_{p,c})$ for some function $G$. We now formally re-state bounded staleness,
which is the key SSP condition that bounds the possible values $\tilde{x}_{p,c}$ can take:

SSP Condition (Bounded Staleness): Fix a staleness $s$. Then, the noisy state $\tilde{x}_{p,c}$ is equal to
$$\tilde{x}_{p,c} = x_0 + \underbrace{\Bigg[\sum_{c'=1}^{c-s-1} \sum_{p'=1}^{P} u_{p',c'}\Bigg]}_{\text{guaranteed pre-window updates}} + \underbrace{\Bigg[\sum_{c'=c-s}^{c-1} u_{p,c'}\Bigg]}_{\text{guaranteed read-my-writes updates}} + \underbrace{\Bigg[\sum_{(p',c') \in S_{p,c}} u_{p',c'}\Bigg]}_{\text{best-effort in-window updates}}, \qquad (2)$$
where $S_{p,c} \subseteq W_{p,c} = ([1,P] \setminus \{p\}) \times [c-s,\, c+s-1]$ is some subset of the updates $u$ written
in the width-$2s$ "window" $W_{p,c}$, which ranges from clock $c-s$ to $c+s-1$ and does not include
updates from worker $p$. In other words, the noisy state $\tilde{x}_{p,c}$ consists of three parts:
1. Guaranteed "pre-window" updates from clock $0$ to $c-s-1$, over all workers.
2. Guaranteed "read-my-writes" set $\{(p, c-s), \ldots, (p, c-1)\}$ that covers all "in-window"
updates made by the querying worker³ $p$.
3. Best-effort "in-window" updates $S_{p,c}$ from the width-$2s$ window⁴ $[c-s,\, c+s-1]$ (not
counting updates from worker $p$). An SSP implementation should try to deliver as many
updates from $S_{p,c}$ as possible, but may choose not to depending on conditions.
Notice that $S_{p,c}$ is specific to worker $p$ at clock $c$; other workers at different clocks will observe
different $S$. Also, observe that SSP generalizes the Bulk Synchronous Parallel (BSP) model:

BSP Corollary: Under zero staleness $s = 0$, SSP reduces to BSP. Proof: $s = 0$ implies $[c, c+s-1] = \emptyset$, and therefore $\tilde{x}_{p,c}$ exactly consists of all updates until clock $c-1$. ∎

Our key tool for convergence analysis is to define a reference sequence of states $x_t$, informally
referred to as the "true" sequence (this is different and unrelated to the SSPtable server's view):
$$x_t = x_0 + \sum_{t'=0}^{t} u_{t'}, \quad \text{where } u_t := u_{t \bmod P,\, \lfloor t/P \rfloor}.$$
In other words, we sum updates by first looping over workers ($t \bmod P$), then over clocks ($\lfloor t/P \rfloor$).
We can now bound the difference between the "true" sequence $x_t$ and the noisy views $\tilde{x}_{p,c}$:

³ This is a "read-my-writes" or self-synchronization property, i.e. workers will always see any updates they
make. Having such a property makes sense because self-synchronization does not incur a network cost.
⁴ The width $2s$ is only an upper bound for the slowest worker. The fastest worker with clock $c_{\max}$ has a
width-$s$ window $[c_{\max}-s,\, c_{\max}-1]$, simply because no updates for clocks $\ge c_{\max}$ have been written yet.
Lemma 1: Assume $s \ge 1$, and let $\tilde{x}_t := \tilde{x}_{t \bmod P,\, \lfloor t/P \rfloor}$, so that
$$\tilde{x}_t = x_t - \underbrace{\Bigg[\sum_{i \in A_t} u_i\Bigg]}_{\text{missing updates}} + \underbrace{\Bigg[\sum_{i \in B_t} u_i\Bigg]}_{\text{extra updates}}, \qquad (3)$$
where we have decomposed the difference between $\tilde{x}_t$ and $x_t$ into $A_t$, the index set of updates $u_i$
that are missing from $\tilde{x}_t$ (w.r.t. $x_t$), and $B_t$, the index set of "extra" updates in $\tilde{x}_t$ but not in $x_t$. We
then claim that $|A_t| + |B_t| \le 2s(P-1)$, and furthermore, $\min(A_t \cup B_t) \ge \max(1,\, t-(s+1)P)$,
and $\max(A_t \cup B_t) \le t + sP$.
Proof: Comparing Eq. (3) with (2), we see that the extra updates obey $B_t \subseteq S_{t \bmod P, \lfloor t/P \rfloor}$,
while the missing updates obey $A_t \subseteq (W_{t \bmod P, \lfloor t/P \rfloor} \setminus S_{t \bmod P, \lfloor t/P \rfloor})$. Because $|W_{t \bmod P, \lfloor t/P \rfloor}| = 2s(P-1)$, the first claim immediately follows. The second and third claims follow from looking at
the left- and right-most boundaries of $W_{t \bmod P, \lfloor t/P \rfloor}$. ∎
Lemma 1 basically says that the "true" state $x_t$ and the noisy state $\tilde{x}_t$ only differ by at most $2s(P-1)$
updates $u_t$, and that these updates cannot be more than $(s+1)P$ steps away from $t$. These properties
can be used to prove convergence bounds for various algorithms; in this paper, we shall focus on
stochastic gradient descent (SGD) [17]:

Theorem 1 (SGD under SSP): Suppose we want to find the minimizer $x^*$ of a convex function
$f(x) = \frac{1}{T}\sum_{t=1}^{T} f_t(x)$, via gradient descent on one component $\nabla f_t$ at a time. We assume the
components $f_t$ are also convex. Let $u_t := -\eta_t \nabla f_t(\tilde{x}_t)$, where $\eta_t = \frac{\sigma}{\sqrt{t}}$ with $\sigma = \frac{F}{L\sqrt{2(s+1)P}}$ for
certain constants $F, L$. Then, under suitable conditions ($f_t$ are $L$-Lipschitz and the distance between
two points $D(x \,\|\, x') \le F^2$),
$$R[X] := \frac{1}{T}\sum_{t=1}^{T} f_t(\tilde{x}_t) - f(x^*) \;\le\; 4FL\sqrt{\frac{2(s+1)P}{T}}.$$
This means that the noisy worker views $\tilde{x}_t$ converge in expectation to the true view $x^*$ (as measured
by the function $f()$, and at rate $O(T^{-1/2})$). We defer the proof to the appendix, noting that it
generally follows the analysis in Langford et al. [17], except in places where Lemma 1 is involved.
Our bound is also similar to [17], except that (1) their fixed delay has been replaced by our
staleness upper bound $2(s+1)P$, and (2) we have shown convergence of the noisy worker views
$\tilde{x}_t$ rather than a true sequence $x_t$. Furthermore, because the constant factor $2(s+1)P$ is only an
upper bound to the number of erroneous updates, SSP's rate of convergence has a potentially tighter
constant factor than Langford et al.'s fixed-staleness system (details are in the appendix).
5 Experiments
We show that the SSP model outperforms fully-synchronous models such as Bulk Synchronous
Parallel (BSP) that require workers to wait for each other on every iteration, as well as asynchronous
models with no model staleness guarantees. The general experimental details are:
• Computational models and implementation: SSP, BSP and Asynchronous⁵. We used SSPtable for the
first two (BSP is just staleness 0 under SSP), and implemented the Asynchronous model using many of the
caching features of SSPtable (to keep the implementations comparable).
• ML models (and parallel algorithms): LDA Topic Modeling (collapsed Gibbs sampling), Matrix Factorization (stochastic gradient descent) and Lasso regression (coordinate gradient descent). All algorithms
were implemented using SSPtable's parameter server interface. For TM and MF, we ran the algorithms in a
"full batch" mode (where the algorithm's workers collectively touch every data point once per clock()),
as well as a "10% minibatch" mode (workers touch 10% of the data per clock()). Due to implementation limitations, we did not run Lasso under the Async model.
• Datasets: Topic Modeling: New York Times (N = 100m tokens, V = 100k terms, K = 100 topics),
Matrix Factorization: NetFlix (480k-by-18k matrix with 100m nonzeros, rank K = 100 decomposition),
Lasso regression: Synthetic dataset (N = 500 samples with P = 400k features⁶). We use a static data
partitioning strategy explained in the Appendix.
• Compute cluster: Multi-core blade servers connected by 10 Gbps Ethernet, running VMware ESX. We
use one virtual machine (VM) per physical machine. Each VM is configured with 8 cores (either 2.3GHz
or 2.5GHz each) and 23GB of RAM, running on top of Debian Linux 7.0.

⁵ The Asynchronous model is used in many ML frameworks, such as YahooLDA [2] and HogWild! [21].
⁶ This is the largest data size we could get the Lasso algorithm to converge on, under ideal BSP conditions.
Convergence Speed. Figure 3 shows objective vs. time plots for the three ML algorithms, over
several machine configurations. We are interested in how long each algorithm takes to reach a given
objective value, which corresponds to drawing horizontal lines on the plots. On each plot, we show
curves for BSP (zero staleness), Async, and SSP for the best staleness value >= 1 (we generally
omit the other SSP curves to reduce clutter). In all cases except Topic Modeling with 8 VMs, SSP
converges to a given objective value faster than BSP or Async. The gap between SSP and the other
systems increases with more VMs and smaller data batches, because both of these factors lead to
increased network communication, which SSP is able to reduce via staleness. We also provide a
scalability-with-N-machines plot in the Appendix.
Computation Time vs Network Waiting Time. To understand why SSP performs better, we look
at how the Topic Modeling (TM) algorithm spends its time during a fixed number of clock()s. In
the 2nd row of Figure 3, we see that for any machine configuration, the TM algorithm spends roughly
the same amount of time on useful computation, regardless of the staleness value. However, the time
spent waiting for network communication drops rapidly with even a small increase in staleness,
allowing SSP to execute clock()s more quickly than BSP (staleness 0). Furthermore, the ratio of
network-to-compute time increases as we add more VMs, or use smaller data batches. At 32 VMs
and 10% data minibatches, the TM algorithm under BSP spends six times more time on network
communications than computation. In contrast, the optimal value of staleness, 32, exhibits a 1:1
ratio of communication to computation. Hence, the value of SSP lies in allowing ML algorithms
to perform far more useful computations per second, compared to the BSP model (e.g. Hadoop).
Similar observations hold for the MF and Lasso applications (graphs not shown for space reasons).
Iteration Quantity and Quality. The network-compute ratio only partially explains SSP's behavior; we need to examine each clock()'s behavior to get a full picture. In the 3rd row of Figure 3,
we plot the number of clocks executed per worker per unit time for the TM algorithm, as well as
the objective value at each clock. Higher staleness values increase the number of clocks executed
per unit time, but decrease each clock's progress towards convergence (as suggested by our theory);
MF and Lasso also exhibit similar behavior (graphs not shown). Thus, staleness is a tradeoff between iteration quantity and quality, and because the iteration rate exhibits diminishing returns
with higher staleness values, there comes a point where additional staleness starts to hurt the rate of
convergence per time. This explains why the best staleness value in a given setting is some constant
0 < s < infinity; hence, SSP can hit a "sweet spot" between quality/quantity that BSP and Async do
not achieve. Automatically finding this sweet spot for a given problem is a subject for future work.
Related Work and Discussion
The idea of staleness has been explored before: in ML academia, it has been analyzed in the context of cyclic-delay architectures [17, 1], in which machines communicate with a central server (or
each other) under a fixed schedule (and hence fixed staleness). Even the bulk synchronous parallel (BSP) model inherently produces stale communications, the effects of which have been studied
for algorithms such as Lasso regression [5] and topic modeling [2]. Our work differs in that SSP
advocates bounded (rather than fixed) staleness to allow higher computational throughput via local
machine caches. Furthermore, SSP?s performance does not degrade when parameter updates frequently collide on the same vector elements, unlike asynchronous lock-free systems [21]. We note
that staleness has been informally explored in the industrial setting at large scales; our work provides
a first attempt at rigorously justifying staleness as a sound ML technique.
Distributed platforms such as Hadoop and GraphLab [18] are popular for large-scale ML. The
biggest difference between them and SSPtable is the programming model: Hadoop uses a stateless
map-reduce model, while GraphLab uses stateful vertex programs organized into a graph. In contrast, SSPtable provides a convenient shared-memory programming model based on a table/matrix
API, making it easy to convert single-machine parallel ML algorithms into distributed versions. In
particular, the algorithms used in our experiments (LDA, MF, Lasso) are all straightforward
conversions of single-machine algorithms. Hadoop's BSP execution model is a special case of SSP,
making SSPtable more general in that regard; however, Hadoop also provides fault-tolerance and
distributed filesystem features that SSPtable does not cover. Finally, there exist special-purpose
tools such as Vowpal Wabbit [16] and YahooLDA [2]. Whereas these systems have been targeted at
a subset of ML algorithms, SSPtable can be used by any ML algorithm that tolerates stale updates.
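To illustrate the flavor of such a table-based programming model, here is a minimal single-process sketch. The class and method names are our own stand-ins, not SSPtable's actual API, and a real implementation shards rows across server machines and adds bounded-staleness client caches.

```python
class Table:
    """Toy stand-in for a shared parameter table (illustrative only)."""
    def __init__(self):
        self.rows = {}

    def get_row(self, r):
        # In a distributed setting this may return a cached, bounded-stale copy.
        return dict(self.rows.get(r, {}))

    def inc(self, r, c, delta):
        # Updates are commutative deltas, so they can be applied in any order.
        row = self.rows.setdefault(r, {})
        row[c] = row.get(c, 0.0) + delta

table = Table()
for clock in range(3):                 # one worker's update loop
    w = table.get_row(0)               # read (possibly stale) parameters
    for j in range(5):
        grad_j = 0.1 * (j - 2)         # stand-in gradient entry
        table.inc(0, j, -0.05 * grad_j)
    # a clock() call would mark the end of this worker's logical iteration
print(table.get_row(0))
```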
The distributed systems community has typically examined staleness in the context of consistency
models. The TACT model [26] describes consistency along three dimensions: numerical error, order
error, and staleness. Other work [24] attempts to classify existing systems according to a number
[Figure 3 appears here; see the caption below. Panels, top to bottom: Topic Modeling convergence (log-likelihood vs. seconds) for LDA on 8 machines/64 threads (full data), 32 machines/256 threads (full data), and 32 machines/256 threads (10% data per iteration), each comparing BSP (stale 0), one intermediate staleness (2, 4, or 32), and async; time breakdowns of compute time vs. network waiting time across staleness values (0 to 48) for LDA on 8 and 32 machines; iteration quantity (clocks vs. time) and quality (log-likelihood vs. clocks) for LDA on 32 machines with 10% minibatches; Matrix Factorization convergence (objective vs. seconds) for 8 and 32 machines, comparing BSP, a fixed staleness (4, 15, or 32), and async; and Lasso convergence (objective vs. seconds) on 16 machines/128 threads, comparing BSP with stalenesses 10, 20, 40, and 80.]
Figure 3: Experimental results: SSP, BSP and Asynchronous parameter servers running Topic Modeling,
Matrix Factorization and Lasso regression. The Convergence graphs plot objective function (i.e. solution
quality) against time. For Topic Modeling, we also plot computation time vs network waiting time, as well as
how staleness affects iteration (clock) frequency (Quantity) and objective improvement per iteration (Quality).
of consistency properties, specifically naming the concept of bounded staleness. The vector clocks
used in SSPtable are similar to those in Fidge [11] and Mattern [20], which were in turn inspired
by Lamport clocks [15]. However, SSPtable uses vector clocks to track the freshness of the data,
rather than causal relationships between updates. [8] gives an informal definition of the SSP model,
motivated by the need to reduce straggler effects in large compute clusters.
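As a rough sketch (ours) of how a vector clock can gate reads on data freshness rather than causal order: a reader at clock c may use cached values only if every worker's committed clock is at least c − s.

```python
class FreshnessClock:
    """Vector clock tracking each worker's last committed iteration
    (simplified illustration; not SSPtable's internal structure)."""
    def __init__(self, num_workers):
        self.committed = [0] * num_workers

    def tick(self, worker):
        self.committed[worker] += 1

    def fresh_enough(self, reader_clock, staleness):
        # SSP condition: cached data must include all updates up to
        # clock (reader_clock - staleness) from every worker.
        return min(self.committed) >= reader_clock - staleness

vc = FreshnessClock(num_workers=4)
for w in (0, 1, 2):                     # three workers commit clock 1; one lags
    vc.tick(w)
print(vc.fresh_enough(reader_clock=1, staleness=0))   # False: BSP would block
print(vc.fresh_enough(reader_clock=1, staleness=1))   # True: SSP proceeds
```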
In databases, bounded staleness has been applied to improve update and query performance. LazyBase [7] allows staleness bounds to be configured on a per-query basis, and uses this relaxed staleness to improve both query and update performance. FAS [23] keeps data replicated in a number of
databases, each providing a different freshness/performance tradeoff. Data stream warehouses [13]
collect data about timestamped events, and provide different consistency depending on the freshness
of the data. Staleness (or freshness/timeliness) has also been applied in other fields such as sensor
networks [14], dynamic web content generation [3], web caching [6], and information systems [4].
Acknowledgments
Qirong Ho is supported by an NSS-PhD Fellowship from A-STAR, Singapore. This work is supported in
part by NIH 1R01GM087694 and 1R01GM093156, DARPA FA87501220324, and NSF IIS1111142 to Eric
P. Xing. We thank the member companies of the PDL Consortium (Actifio, APC, EMC, Emulex, Facebook,
Fusion-IO, Google, HP, Hitachi, Huawei, Intel, Microsoft, NEC, NetApp, Oracle, Panasas, Samsung, Seagate,
Symantec, VMware, Western Digital) for their interest, insights, feedback, and support. This work is supported
in part by Intel via the Intel Science and Technology Center for Cloud Computing (ISTC-CC) and hardware
donations from Intel and NetApp.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, pages 5451–5452. IEEE, 2012.
[2] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable models. In WSDM, pages 123–132, 2012.
[3] A. Labrinidis and N. Roussopoulos. Balancing performance and data freshness in web database servers. pages 393–404, September 2003.
[4] M. Bouzeghoub. A framework for analysis of data freshness. In Proceedings of the 2004 International Workshop on Information Quality in Information Systems, IQIS '04, pages 59–67, 2004.
[5] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for L1-regularized loss minimization. In International Conference on Machine Learning (ICML 2011), June 2011.
[6] L. Bright and L. Raschid. Using latency-recency profiles for data delivery on the web. In Proceedings of the 28th International Conference on Very Large Data Bases, VLDB '02, pages 550–561, 2002.
[7] J. Cipar, G. Ganger, K. Keeton, C. B. Morrey, III, C. A. Soules, and A. Veitch. LazyBase: trading freshness for performance in a scalable database. In Proceedings of the 7th ACM European Conference on Computer Systems, pages 169–182, 2012.
[8] J. Cipar, Q. Ho, J. K. Kim, S. Lee, G. R. Ganger, G. Gibson, K. Keeton, and E. Xing. Solving the straggler problem with bounded staleness. In HotOS '13. USENIX, 2013.
[9] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. In NIPS 2012, 2012.
[10] Facebook. www.facebook.com/note.php?note_id=10150388519243859, January 2013.
[11] C. J. Fidge. Timestamps in message-passing systems that preserve the partial ordering. In 11th Australian Computer Science Conference, pages 55–66, University of Queensland, Australia, 1988.
[12] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In KDD, pages 69–77. ACM, 2011.
[13] L. Golab and T. Johnson. Consistency in a stream warehouse. In CIDR 2011, pages 114–122.
[14] C.-T. Huang. Loft: Low-overhead freshness transmission in sensor networks. In SUTC 2008, pages 241–248, Washington, DC, USA, 2008. IEEE Computer Society.
[15] L. Lamport. Time, clocks, and the ordering of events in a distributed system. Commun. ACM, 21(7):558–565, July 1978.
[16] J. Langford, L. Li, and A. Strehl. Vowpal Wabbit online learning project, 2007.
[17] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In Advances in Neural Information Processing Systems, pages 2331–2339, 2009.
[18] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed GraphLab: A framework for machine learning and data mining in the cloud. PVLDB, 2012.
[19] G. Malewicz, M. H. Austern, A. J. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski. Pregel: a system for large-scale graph processing. In Proceedings of the 2010 International Conference on Management of Data, pages 135–146. ACM, 2010.
[20] F. Mattern. Virtual time and global states of distributed systems. In C. M. et al., editor, Proc. Workshop on Parallel and Distributed Algorithms, pages 215–226, North-Holland / Elsevier, 1989.
[21] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[22] R. Power and J. Li. Piccolo: building fast, distributed programs with partitioned tables. In Proceedings of the USENIX Conference on Operating Systems Design and Implementation (OSDI), pages 1–14, 2010.
[23] U. Röhm, K. Böhm, H.-J. Schek, and H. Schuldt. FAS: a freshness-sensitive coordination middleware for a cluster of OLAP components. In VLDB 2002, pages 754–765. VLDB Endowment, 2002.
[24] D. Terry. Replicated data consistency explained through baseball. Technical Report MSR-TR-2011-137, Microsoft Research, October 2011.
[25] Yahoo! http://webscope.sandbox.yahoo.com/catalog.php?datatype=g, 2013.
[26] H. Yu and A. Vahdat. Design and evaluation of a conit-based continuous consistency model for replicated services. ACM Transactions on Computer Systems, 20(3):239–282, Aug. 2002.
[27] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: cluster computing with working sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, 2010.
[28] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. Advances in Neural Information Processing Systems, 23(23):1–9, 2010.
Learning with Invariance via Linear Functionals on
Reproducing Kernel Hilbert Space
Xinhua Zhang, Machine Learning Research Group, National ICT Australia and ANU ([email protected])
Wee Sun Lee, Department of Computer Science, National University of Singapore ([email protected])
Yee Whye Teh, Department of Statistics, University of Oxford ([email protected])
Abstract
Incorporating invariance information is important for many learning problems. To
exploit invariances, most existing methods resort to approximations that either
lead to expensive optimization problems such as semi-definite programming, or
rely on separation oracles to retain tractability. Some methods further limit the
space of functions and settle for non-convex models. In this paper, we propose a
framework for learning in reproducing kernel Hilbert spaces (RKHS) using local
invariances that explicitly characterize the behavior of the target function around
data instances. These invariances are compactly encoded as linear functionals
whose values are penalized by some loss function. Based on a representer theorem that we establish, our formulation can be efficiently optimized via a convex
program. For the representer theorem to hold, the linear functionals are required
to be bounded in the RKHS, and we show that this is true for a variety of commonly used RKHS and invariances. Experiments on learning with unlabeled data
and transform invariances show that the proposed method yields better or similar
results compared with the state of the art.
1 Introduction
Invariances are among the most useful prior information used in machine learning [1]. In many
vision problems such as handwritten digit recognition, detectors are often supposed to be invariant
to certain local transformations, such as translation, rotation, and scaling [2, 3]. One way to utilize
this invariance is by assuming that the gradient of the function is small along the directions of transformation at each data instance. Another important scenario is semi-supervised learning [4], which
relies on reasonable priors over the relationship between the data distribution and the discriminant
function [5, 6]. It is commonly assumed that the function does not change much in the proximity of
each observed data instance, which reflects the typical clustering structure in the data set: instances
from the same class are clustered together and away from those of different classes [7–9]. Another
popular assumption is that the function varies smoothly over the graph Laplacian [10–12].
A number of existing works have established a mathematical framework for learning with invariance. Suppose $T(x, \theta)$ transforms a data point $x$ by an operator $T$ with a parameter $\theta$ (e.g. $T$ for rotation and $\theta$ for the degree of rotation). Then to incorporate invariance, the target function $f$ is assumed to be (almost) invariant over $T(x, \Theta) := \{T(x, \theta) : \theta \in \Theta\}$, where $\Theta$ controls the locality of invariance. The consistency of this framework was shown by [13] in the context of robust
optimization [14]. However, in practice it usually leads to a large or infinite number of constraints,
and hence tractable formulations inevitably rely on approximating or restricting the invariance under
consideration. Finally, this paradigm gets further complicated when f comes from a rich space of
functions, e.g. the reproducing kernel Hilbert space (RKHS) induced by universal kernels.
In [15], all perturbations within the ellipsoids around instances are treated as invariances. This led to
a second order cone program, which is difficult to solve efficiently. In [16], a discrete set of $\theta$ that
corresponds to feature deletions [17] is considered. The problem is reduced to a quadratic program,
but at the cost of blowing up the number of variables, which makes scaling to large problems challenging.
A one-step approximation of $T(x, \Theta)$ via even-order Taylor expansions around $\theta = 0$
is used in [18]. This results in a semi-definite program, which is still hard to solve. A further
simplification was introduced by [19], which performed sparse approximations of T (x, ?) by finding (via an oracle) the most violating instance under the current solution. Besides yielding a cheap
quadratic program at each iteration, it also improved upon the Virtual Support Vector approach in
[20], which did not have a clear optimization objective despite a similar motivation of sparse approximation. However, tractable oracles are often unavailable, and so a simpler approximation can
be performed by merely enforcing the invariance of $f$ along some given directions, e.g. the tangent
direction $\frac{\partial}{\partial \theta}\big|_{\theta=0} T(x, \theta)$. This idea was used in [21] in a nonlinear RKHS, but their direction of perturbation was not in the original space but in the RKHS. By contrast, [3] did penalize the gradient
of f in the original feature space, but their function space was limited to neural networks and only
locally optimal solutions were found.
The goal of our paper, therefore, is to develop a new framework that: (1) allows a variety of invariances to be compactly encoded over a rich family of functions like RKHS, and (2) allows the search
of the optimal function to be formulated as a convex program that is efficiently solvable. The key
requirement to our approach is that the invariances can be characterized by linear functionals that
are bounded (§2). Under this assumption, we are able to formulate our model into a standard regularized risk minimization problem, where the objective consists of the sum of loss functions on these
linear functionals, the usual loss functions on the labeled training data, and a regularization penalty
based on the RKHS norm of the function (§3). We give a representer theorem that guarantees that
the cost can be minimized by linearly combining a finite number of basis functions¹. Using convex
loss, the resulting optimization problem is a convex program, which can be efficiently solved in a
batch or online fashion (§5). Note [23] also proposed an operator based model for invariance, but
did not derive a representer theorem and did not study the empirical performance.
We also show that a wide range of commonly used invariances can be encoded as bounded linear
functionals. These include derivatives, transformation invariances, and local averages in commonly
used RKHSs such as those defined by Gaussian and polynomial kernels (§4). Experiments show that
the use of some of these invariances within our framework yields better or similar results compared
to the state of the art.
Finally, we point out that our focus is to find a function in a given RKHS which respects the prespecified invariances. We are not constructing kernels that instantiate the invariance, e.g. [24, 25].
2 Preliminaries
Suppose features of training examples lie in a domain $\mathcal{X}$. A function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is called a positive semi-definite kernel (or simply kernel) if for all $l \in \mathbb{N}$ and all $x_1, \ldots, x_l \in \mathcal{X}$, the $l \times l$ Gram matrix $K := (k(x_i, x_j))_{ij}$ is symmetric positive semi-definite. Example kernels on $\mathbb{R}^n \times \mathbb{R}^n$ include the polynomial kernel of degree $r$, defined as $k(x_1, x_2) = (\langle x_1, x_2 \rangle + 1)^r$,² as well as Gaussian kernels, defined as $k(x_1, x_2) = \varphi_\sigma(x_1, x_2)$ where $\varphi_\sigma(x_1, x_2) := \exp(-\|x_1 - x_2\|^2 / (2\sigma^2))$. More comprehensive introductions to kernels are available in, e.g., [26–28].
Given $k$, let $\mathcal{H}_0$ be the set of all finite linear combinations of functions in $\{k(x, \cdot) : x \in \mathcal{X}\}$, and endow on $\mathcal{H}_0$ an inner product as $\langle f, g \rangle = \sum_{i=1}^{p} \sum_{j=1}^{q} \alpha_i \beta_j k(x_i, y_j)$ where $f(\cdot) = \sum_{i=1}^{p} \alpha_i k(x_i, \cdot)$ and $g(\cdot) = \sum_{j=1}^{q} \beta_j k(y_j, \cdot)$. Note $\langle f, g \rangle$ is invariant to the form of expansion of $f$ and $g$ [26, 27]. Using the positive semi-definite properties of $k$, it is easy to show that $\mathcal{H}_0$ is an inner product space, and we call its completion under $\langle \cdot, \cdot \rangle$ a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ induced by $k$. For any $f \in \mathcal{H}$, the reproducing property implies $f(x) = \langle f, k(x, \cdot) \rangle$, and we denote $\|f\|^2 := \langle f, f \rangle$.
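As a quick concreteness check, both example kernels and their Gram matrices can be computed in a few lines (a sketch of ours using NumPy; variable names are illustrative):

```python
import numpy as np

def polynomial_gram(X, r):
    """Gram matrix of k(x1, x2) = (<x1, x2> + 1)^r for the rows of X."""
    return (X @ X.T + 1.0) ** r

def gaussian_gram(X, sigma):
    """Gram matrix of k(x1, x2) = exp(-||x1 - x2||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-d2 / (2.0 * sigma**2))

X = np.random.randn(5, 3)
K = gaussian_gram(X, sigma=0.25)
assert np.all(np.linalg.eigvalsh(K) > -1e-10)  # positive semi-definite
```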
2.1 Operators and Representers
Definition 1 (Bounded Linear Operator and Functional). A linear operator $T$ is a mapping from a vector space $\mathcal{V}$ to a vector space $\mathcal{W}$, such that $T(x + y) = Tx + Ty$ and $T(\alpha x) = \alpha \cdot Tx$ for all $x, y \in \mathcal{V}$ and scalar $\alpha \in \mathbb{R}$. $T$ is also called a functional if $\mathcal{W} = \mathbb{R}$. In the case that $\mathcal{V}$ and $\mathcal{W}$ are normed, $T$ is called bounded if $c := \sup_{x \in \mathcal{V}, \|x\|_{\mathcal{V}} = 1} \|Tx\|_{\mathcal{W}}$ is finite, and we call $c$ the norm of the operator, denoted by $\|T\|$.
¹ A similar result was provided in [22], but not in the context of learning with invariance.
² We write a variable in boldface if it is a vector in a Euclidean space.
Example 1. Let $\mathcal{H}$ be an RKHS induced by a kernel $k$ defined on $\mathcal{X} \times \mathcal{X}$. Then for any $x \in \mathcal{X}$, the linear functional $T : \mathcal{H} \to \mathbb{R}$ defined as $T(f) := f(x)$ is bounded since $|f(x)| = |\langle f, k(x, \cdot) \rangle| \le \|k(x, \cdot)\| \cdot \|f\| = k(x, x)^{1/2} \|f\|$ by the Cauchy-Schwarz inequality.
Boundedness of linear functionals is particularly useful thanks to the Riesz representation theorem, which establishes their one-to-one correspondence to $\mathcal{V}$ [29].
Theorem 1 (Riesz Representation Theorem). Every bounded linear functional $L$ on a Hilbert space $\mathcal{V}$ can be represented in terms of an inner product $L(x) = \langle x, z \rangle$ for all $x \in \mathcal{V}$, where the representer of the functional, $z \in \mathcal{V}$, has norm $\|z\| = \|L\|$ and is uniquely determined by $L$.
Example 2. Let $\mathcal{H}$ be the RKHS induced by a kernel $k$. For any functional $L$ on $\mathcal{H}$, the representer $z$ can be constructed as $z(x) = \langle z, k(x, \cdot) \rangle = L(k(x, \cdot))$ for all $x \in \mathcal{X}$. By Theorem 1, $z \in \mathcal{H}$.
Using Riesz's representer theorem, it is not hard to show that for any bounded linear operator $T : \mathcal{V} \to \mathcal{V}$ where $\mathcal{V}$ is Hilbertian, there exists a unique bounded linear operator $T^* : \mathcal{V} \to \mathcal{V}$ such that $\langle Tx, y \rangle = \langle x, T^* y \rangle$ for all $x, y \in \mathcal{V}$. $T^*$ is called the adjoint operator. So continuing Example 2:
Example 3. Suppose the functional $L$ has the form $L(f) = T(f)(x_0)$, where $x_0 \in \mathcal{X}$ and $T : \mathcal{H} \to \mathcal{H}$ is a bounded linear operator on $\mathcal{H}$. Then the representer of $L$ is $z = T^*(k(x_0, \cdot))$ because
$$\forall x \in \mathcal{X}, \quad z(x) = L(k(x, \cdot)) = T(k(x, \cdot))(x_0) = \langle T(k(x, \cdot)), k(x_0, \cdot) \rangle = \langle k(x, \cdot), T^*(k(x_0, \cdot)) \rangle. \qquad (1)$$
Riesz's theorem will be useful for our framework since it allows us to compactly represent functionals related to local invariances as elements of the RKHS.
3 Regularized Risk Minimization in RKHS with Invariances
To simplify the presentation, we first describe our framework in the settings of semi-supervised
learning [SSL, 4], and later show how to extend it to other learning scenarios in a straightforward
way. In SSL, we wish to learn a target function $f$ both from labeled data and from local invariances extracted from labeled and unlabeled data. Let $(x_1, y_1), \ldots, (x_l, y_l)$ be the labeled training data, and $\ell_1(f(x), y)$ be the loss function on $f$ when the training input $x$ is labeled as $y$. In this paper we restrict $\ell_1$ to be convex in its first argument, e.g. logistic loss, hinge loss, and squared loss.
We measure deviations from local invariances around each labeled or unlabeled input instance, and express them as bounded linear functionals $L_{l+1}(f), \ldots, L_{l+m}(f)$ on the RKHS $\mathcal{H}$. The linear functionals are associated with another convex loss function $\ell_2(L_i(f))$ penalizing violations of the local invariances. As an example, the derivative of $f$ with respect to an input feature at some training instance $x$ is a linear functional in $f$, and the loss function penalizes large values of the derivative at $x$ using, e.g., squared loss, absolute loss, and $\epsilon$-insensitive loss. Section 4 describes other local invariances we can consider and shows that these can be expressed as bounded linear functionals.
Finally, we penalize the complexity of $f$ via the squared RKHS norm $\|f\|^2$. Putting together the loss and regularizer, we set out to minimize the regularized risk functional over $f \in \mathcal{H}$:
$$\min_{f \in \mathcal{H}} \ \frac{1}{2}\|f\|^2 + \lambda \sum_{i=1}^{l} \ell_1(f(x_i), y_i) + \gamma \sum_{i=l+1}^{l+m} \ell_2(L_i(f)), \qquad (2)$$
where $\lambda, \gamma > 0$. By the convexity of $\ell_1$ and $\ell_2$, (2) must be a convex optimization problem. However
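As a worked specialization (our illustration, not stated in this form here): with squared losses $\ell_1(t, y) = (t - y)^2$ and $\ell_2(t) = t^2$, and with $L_i$ ranging over the derivative functionals of Section 4.1 at each (labeled or unlabeled) instance $x_j$, (2) becomes

$$\min_{f \in \mathcal{H}} \ \frac{1}{2}\|f\|^2 + \lambda \sum_{i=1}^{l} (f(x_i) - y_i)^2 + \gamma \sum_{j} \sum_{d=1}^{n} \left( \frac{\partial f(x)}{\partial x^d}\Big|_{x = x_j} \right)^2,$$

i.e. the invariance term penalizes $\|\nabla f(x_j)\|^2$ at each instance $x_j$.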
it is still in the function space and involves functionals. In order to derive an efficient optimization
procedure, we now derive a representer theorem showing that the optimal solution lies in the span of
a finite number of functions associated with the labeled data and the representers of the functionals
$L_i$. Similar results are available in [22].
Theorem 2. Let $\mathcal{H}$ be the RKHS defined by the kernel $k$. Let $L_i$ ($i = l+1, \ldots, l+m$) be bounded linear functionals on $\mathcal{H}$ with representers $z_i$. Then the optimal solution to (2) must be of the form
$$g(\cdot) = \sum_{i=1}^{l} \alpha_i k(x_i, \cdot) + \sum_{i=l+1}^{l+m} \alpha_i z_i(\cdot). \qquad (3)$$
Furthermore, the parameters $\alpha = (\alpha_1, \ldots, \alpha_{l+m})'$ (finite dimensional) can be found by minimizing
$$\lambda \sum_{i=1}^{l} \ell_1(\langle k(x_i, \cdot), f \rangle, y_i) + \gamma \sum_{i=l+1}^{l+m} \ell_2(\langle z_i, f \rangle) + \frac{1}{2}\alpha' K \alpha, \quad \text{where } f = \sum_{i=1}^{l} \alpha_i k(x_i, \cdot) + \sum_{i=l+1}^{l+m} \alpha_i z_i. \qquad (4)$$
Here $K_{ij} = \langle \tilde{k}_i, \tilde{k}_j \rangle$, where $\tilde{k}_i = k(x_i, \cdot)$ if $i \le l$, and $\tilde{k}_i = z_i(\cdot)$ otherwise. (Proof is in Appendix A.)
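As a numerical sketch of Theorem 2 (ours; we choose squared loss for both $\ell_1$ and $\ell_2$, so that minimizing (4) reduces to a linear system via standard ridge algebra):

```python
import numpy as np

def fit_invariant_model(Kt, y, l, lam, gam, jitter=1e-8):
    """Minimize (4) with squared losses (a sketch of ours):
    lam * ||B a - y||^2 + gam * ||C a||^2 + 0.5 * a' Kt a,
    where Kt is the (l+m) x (l+m) matrix of Theorem 2, B = Kt[:l] gives the
    labeled evaluations <k(x_i,.), f> and C = Kt[l:] the invariance values
    <z_i, f>. Setting the gradient to zero yields a linear system in a."""
    B, C = Kt[:l, :], Kt[l:, :]
    A = 2 * lam * B.T @ B + 2 * gam * C.T @ C + Kt
    A += jitter * np.eye(A.shape[0])          # numerical safeguard
    return np.linalg.solve(A, 2 * lam * B.T @ y)
```

The returned coefficients plug into (3) to evaluate the learned function on new points.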
Theorem 2 is similar to the results in [18, Proposition 3] and [19, Eq 8], where the optimal function
lies in the span of a finite number of representers. However, our model is quite different in that it
uses the representers of the linear functionals corresponding to the invariance, rather than virtual
samples drawn from the invariant neighborhood. This could result in more compact models because
the invariance (e.g. rotation) is enforced by a single representer, rather than multiple virtual examples
(e.g. various degrees of rotation) drawn from the trajectory of invariant transforms. By the expansion
of $f$ in (4), the labeling of a new instance $x$ depends not only on $k(x, x_i)$, which often measures the similarity between $x$ and training examples, but also takes into account the extent to which $k(x, \cdot)$, as a function, conforms to the prior invariances.
Computationally, $\langle k(x_i, \cdot), z_j \rangle$ is straightforward based on the definition of $L_j$. The efficient computation of $\langle z_i, z_j \rangle$ depends on the specific kernels and invariances, as we will show in Section 4. In the simplest case, consider the commonly used graph Laplacian regularizer [10, 11] which, given the similarity measure $w_{ij}$ between $x_i$ and $x_j$, can be written as $\sum_{ij} w_{ij} (f(x_i) - f(x_j))^2 = \sum_{ij} w_{ij} (L_{ij}(f))^2$, where $L_{ij}(f) = \langle f, k(x_i, \cdot) - k(x_j, \cdot) \rangle$ is linear and bounded. Then $\langle z_{ij}, z_{pq} \rangle = k(x_i, x_p) + k(x_j, x_q) - k(x_j, x_p) - k(x_i, x_q)$. Another generic approach is to use the assumption in Example 3 that $\langle z_s, f \rangle = L_s(f) = T_s(f)(x_s)$, $\forall s \in \{i, j\}$. Then
$$\langle z_i, z_j \rangle = L_i(z_j) = T_i(z_j)(x_i) = \langle T_i(z_j), k(x_i, \cdot) \rangle = \langle z_j, T_i^*(k(x_i, \cdot)) \rangle = T_j(T_i^*(k(x_i, \cdot)))(x_j). \qquad (5)$$
In practice, classifiers such as the support vector machine often use an additional constant term (bias)
that is not penalized in the optimization. This is equivalent to searching f over F + H, where F is a
finite set of basis functions. A similar representer theorem can be established (see Appendix A).
4 Local Invariances as Bounded Linear Functionals
Interestingly, many useful local invariances can be modeled as bounded linear functionals. If an invariance can be expressed in terms of function values $f(x)$, then it must be bounded as shown in Example 1. In general, boundedness hinges on the functional and the RKHS $\mathcal{H}$. When $\mathcal{H}$ is finite dimensional, such as that induced by linear or polynomial kernels, all linear functionals on $\mathcal{H}$ must be bounded:
Theorem 3 ([29, Thm 2.7-8]). Linear functionals on a finite dimensional normed space are bounded.
However, in most nonparametric statistics problems that are of interest, the RKHS is infinite dimensional. So the boundedness requires a more refined analysis depending on the specific functional.
4.1 Differentiation Functional
In semi-supervised learning, a common prior is that the discriminant function f does not change
rapidly around sampled points. Therefore, we expect the norm of the gradient at these locations is
small. Suppose $\mathcal{X} \subseteq \mathbb{R}^n$ is an open set, and $k$ is continuously differentiable on $\mathcal{X}^2$. Then we are interested in linear functionals $L_{x_i,d}(f) := \frac{\partial f(x)}{\partial x^d}\big|_{x = x_i}$, where $x^d$ stands for the $d$-th component of the vector $x$. Then $L_{x_i,d}$ must be bounded:
Theorem 4. $L_{x_i,d}$ is bounded on $\mathcal{H}$ with respect to the RKHS norm.
Proof. This result is immediate from the inequality given by [28, Corollary 4.36]:
$$\left| \frac{\partial}{\partial x^d} f(x) \right| \le \left( \frac{\partial^2}{\partial x^d \, \partial y^d} k(x, y) \Big|_{y = x} \right)^{1/2} \|f\|.$$
Let us denote the representer of $L_{x_i,d}$ as $z_{i,d}$. Indeed, [28, Corollary 4.36] established the same result for higher order partial derivatives, which can be easily used in our framework as well.
The inner product between representers can be computed by definition:
$$\langle z_{i,d}, z_{j,d'} \rangle = \frac{\partial}{\partial y^{d'}} z_{i,d}(y) \Big|_{y = x_j} = \frac{\partial}{\partial y^{d'}} \langle z_{i,d}, k(y, \cdot) \rangle \Big|_{y = x_j} = \frac{\partial^2}{\partial y^{d'} \, \partial x^d} k(x, y) \Big|_{x = x_i, \, y = x_j}. \qquad (6)$$
If $k$ is considered as a function on $(x, y) \in \mathbb{R}^{2n}$, this implies that the inner product $\langle z_{i,d}, z_{j,d'} \rangle$ is the $(x^d, y^{d'})$-th element of the Hessian of $k$ evaluated at $(x_i, x_j)$, which could be interpreted as some sort of "covariance" between the two invariances with respect to the touchstone function $k$.
Applying (6) to the polynomial kernel $k(x, y) = (\langle x, y \rangle + 1)^r$, we derive
$$\langle z_{i,d}, z_{j,d'} \rangle = r (\langle x_i, x_j \rangle + 1)^{r-2} \left[ (r-1) x_i^d x_j^{d'} + (\langle x_i, x_j \rangle + 1) \delta_{d=d'} \right], \qquad (7)$$
where $\delta_{d=d'} = 1$ if $d = d'$, and $0$ otherwise. For Gaussian kernels $k(x, y) = \varphi_\sigma(x, y)$, we can take another path that is different from (6). Note that $L_{x_i,d}(f) = T(f)(x_i)$ where $T : f \mapsto \frac{\partial f}{\partial x^d}$, and it is straightforward to verify that $T$ is bounded with $T^* = -T$ for the Gaussian RKHS. So applying (5),
$$\langle z_{i,d}, z_{j,d'} \rangle = -\frac{\partial}{\partial y^{d'}} \frac{\partial}{\partial y^d} k(x_i, y) \Big|_{y = x_j} = \frac{k(x_i, x_j)}{\sigma^4} \left[ \sigma^2 \delta_{d=d'} - (x_i^d - x_j^d)(x_i^{d'} - x_j^{d'}) \right]. \qquad (8)$$
By Theorem 1, it immediately follows that the norm of $L_{x_i,d}$ is $\sqrt{\langle z_{i,d}, z_{i,d} \rangle} = 1/\sigma$.
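Equation (8), together with the cross terms $\langle k(x, \cdot), z_{j,d} \rangle = \frac{\partial}{\partial y^d} k(x, y)\big|_{y = x_j}$, translates directly into code. The following sketch (ours) also checks the cross term against a finite difference:

```python
import numpy as np

def gauss(x, y, sigma):
    return np.exp(-np.sum((x - y)**2) / (2 * sigma**2))

def rep_inner(xi, xj, d, dp, sigma):
    """<z_{i,d}, z_{j,d'}> for the Gaussian kernel, per (8)."""
    diff = xi - xj
    return gauss(xi, xj, sigma) / sigma**4 * (
        sigma**2 * (d == dp) - diff[d] * diff[dp])

def cross_inner(x, xj, d, sigma):
    """<k(x,.), z_{j,d}> = d/dy^d k(x, y) evaluated at y = x_j."""
    return gauss(x, xj, sigma) * (x[d] - xj[d]) / sigma**2

# Finite-difference check of the cross term (illustrative):
x, xj, sigma, eps = np.random.randn(3), np.random.randn(3), 0.8, 1e-6
e0 = np.eye(3)[0]
fd = (gauss(x, xj + eps * e0, sigma) - gauss(x, xj - eps * e0, sigma)) / (2 * eps)
assert abs(fd - cross_inner(x, xj, 0, sigma)) < 1e-6
```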
4.2 Transformation Invariance
Invariance to known local transformations of input has been used successfully in supervised learning
[3]. Here we show that transformation invariance can be handled in our framework via representers
in RKHS. In particular, gradients with respect to the transformations are bounded linear functionals.
Following [3], we first require a differentiable function g that maps points from a space S to R,
where S lies in a Euclidean space.3 For example, an image can be considered as a function g that
maps points in the plane S = R2 to the intensity of the image at that point. Next, we consider a
family of bijective transformations $t_\theta : S \mapsto S$, which is differentiable in both the input and the parameter $\theta$. For instance, translation, rotation, and scaling can be represented as mappings $t_\theta^{(1)}$, $t_\theta^{(2)}$, and $t_\theta^{(3)}$ respectively:
$$\begin{pmatrix} x \\ y \end{pmatrix} \overset{t^{(1)}_\theta}{\mapsto} \begin{pmatrix} x + \theta_x \\ y + \theta_y \end{pmatrix}, \qquad \begin{pmatrix} x \\ y \end{pmatrix} \overset{t^{(2)}_\theta}{\mapsto} \begin{pmatrix} x \cos\theta - y \sin\theta \\ x \sin\theta + y \cos\theta \end{pmatrix}, \qquad \begin{pmatrix} x \\ y \end{pmatrix} \overset{t^{(3)}_\theta}{\mapsto} \begin{pmatrix} x + \theta x \\ y + \theta y \end{pmatrix}.$$
Based on $t_\theta$, we define a family of operators $T_\theta : \mathbb{R}^S \to \mathbb{R}^S$ as $T_\theta(g) = g \circ t_\theta^{-1}$. The function $T_\theta(g)(x, y)$ gives us the intensity at location $(x, y)$ of the image translated by an offset $(\theta_x, \theta_y)$, rotated by an angle $\theta$, or scaled by an amount $\theta$. Finally we sample from $S$ a fixed number of locations $\mathcal{S} := \{s_1, \ldots, s_q\}$, and present to the learning algorithm a vector $I(g, \theta; \mathcal{S}) := (T_\theta(g)(s_1), \ldots, T_\theta(g)(s_q))'$. Digital images are discretizations of real images where we sample at fixed pixel locations of the function $g$ to obtain a fixed sized vector. Clearly, for a fixed $g$, the sampled observation $I(g, \theta; \mathcal{S})$ is a vector valued function of $\theta$.
The following result allows our framework to use derivatives with respect to the parameters in $\theta$.
Theorem 5. Let $\mathcal{F}$ be a normed vector space of functions that map from the range of $I(g, \theta; \mathcal{S})$ to $\mathbb{R}$. Suppose the linear functional that maps $f \in \mathcal{F}$ to $\frac{\partial f(u)}{\partial u^j}\big|_{u = u_0}$ is bounded for any $u_0$ and coordinate $j$, and its norm is denoted as $C_j$. Then the functional $L_{g,d,\mathcal{S}} : f \mapsto \frac{\partial}{\partial \theta^d}\big|_{\theta=0} f(I(g, \theta; \mathcal{S}))$, i.e. the derivative with respect to each of the components in $\theta$, must be a bounded linear functional on $\mathcal{F}$.
Proof. Let $I^j(g, \theta; \mathcal{S})$ be the $j$-th component of $I(g, \theta; \mathcal{S})$. Using the chain rule, for any $f \in \mathcal{F}$,
$$|L_{g,d,\mathcal{S}}(f)| = \left| \frac{\partial f(I(g, \theta; \mathcal{S}))}{\partial \theta^d} \Big|_{\theta=0} \right| = \left| \sum_{j=1}^{q} \frac{\partial f(u)}{\partial u^j} \Big|_{u = I(g,0;\mathcal{S})} \cdot \frac{\partial I^j(g, \theta; \mathcal{S})}{\partial \theta^d} \Big|_{\theta=0} \right| \le \sum_{j=1}^{q} C_j \|f\| \left| \frac{\partial I^j(g, \theta; \mathcal{S})}{\partial \theta^d} \Big|_{\theta=0} \right| = \|f\| \sum_{j=1}^{q} C_j \left| \frac{\partial I^j(g, \theta; \mathcal{S})}{\partial \theta^d} \Big|_{\theta=0} \right|.$$
The proof is completed by noting that the last summation is a finite constant independent of $f$.
Corollary 1. The derivatives $\frac{\partial}{\partial \theta^d}\big|_{\theta=0} f(I(g, \theta; \mathcal{S}))$ with respect to each of the components in $\theta$ are bounded linear functionals on the RKHS defined by the polynomial and Gaussian kernels.
To compute the inner product between representers, let $z_{g,d,\mathcal{S}}$ be the representer of $L_{g,d,\mathcal{S}}$ and denote $v_{g,d,\mathcal{S}} = \frac{\partial I(g, \theta; \mathcal{S})}{\partial \theta^d}\big|_{\theta=0}$. Then $\langle k(x, \cdot), z_{g,d,\mathcal{S}} \rangle = \big\langle \frac{\partial}{\partial y} k(x, y)\big|_{y = I(g,0;\mathcal{S})}, \, v_{g,d,\mathcal{S}} \big\rangle$ and
$$\langle z_{g,d,\mathcal{S}}, z_{g',d',\mathcal{S}'} \rangle = \Big\langle v_{g,d,\mathcal{S}}, \ \frac{\partial}{\partial x} \Big\langle v_{g',d',\mathcal{S}'}, \ \frac{\partial}{\partial y} k(x, y) \Big|_{y = I(g',0;\mathcal{S}')} \Big\rangle \Big|_{x = I(g,0;\mathcal{S})} \Big\rangle. \qquad (9)$$
³ In practice, $S$ can be a discrete domain such as the pixel coordinates of an image. Then $g$ can be extended to an interpolated continuous space via convolution with a Gaussian. See [3, §2.3] for more details.
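In practice $v_{g,d,\mathcal{S}}$ is often approximated by finite differences, as done in the experiments of Section 6.3. A sketch (ours, using SciPy's image rotation as a stand-in for $T_\theta$ on a sampled image):

```python
import numpy as np
from scipy.ndimage import rotate

def v_rotation(img, dtheta_deg=1.0):
    """Finite-difference estimate of dI(g, theta; S)/dtheta at theta = 0
    for the rotation family, with S taken as the pixel grid (illustrative)."""
    plus = rotate(img, dtheta_deg, reshape=False, order=1)
    minus = rotate(img, -dtheta_deg, reshape=False, order=1)
    return (plus - minus).ravel() / (2.0 * dtheta_deg)

img = np.random.rand(28, 28)   # stand-in for a digit image
v = v_rotation(img)            # this vector enters (9) and the cross terms
```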
4.3 Local Averaging
Using gradients to enforce the local invariance that the target function does not change much around
data instances increases the number of basis functions by a factor of n, where n is the number of
gradient directions that we use. The optimization problem can become computationally expensive if
n is large. When we do not have useful information about the invariant directions, it may be useful
to have methods that do not increase the number of basis functions by much. Consider functionals
$$L_{x_i}(f) = \int_{\mathcal{X}} f(\tau) \, p(x_i - \tau) \, d\tau - f(x_i), \qquad (10)$$
where $p(\cdot)$ is a probability density function centered at zero. Minimizing a loss with such linear functionals will favor functions whose local averages given by the integral are close to the function values at data instances. If $p(\cdot)$ is selected to be a low pass filter, the function should be smoother
and less likely to change in regions with more data points but is less constrained to be smooth in
regions where the data points are sparse. Hence, such loss functions may be appropriate when we
believe that data instances from the same class are clustered together.
To use the framework we have developed, we need to select the probability density $p(\cdot)$ and the kernel $k$ such that $L_{x_i}(f)$ is a bounded linear functional.
Theorem 6. Assume there is a constant $C > 0$ such that $k(x, x) \le C^2$ for all $x \in \mathcal{X}$. Then the linear functional $L_{x_i}$ in (10) is bounded in the RKHS defined by $k$ for any probability density $p(\cdot)$.
See proof in Appendix B. As a result, radial kernels such as Gaussian and exponential kernels make
$L_{x_i}$ bounded. $k(x, x)$ is not bounded for the polynomial kernel, but that case has been covered by Theorem 3.
To allow for efficient implementation, the inner product between representers of invariances must be computed efficiently. Unlike differentiation, integration often does not result in a closed form expression. Fortunately, analytic evaluation is feasible for the Gaussian kernel $\varphi_\sigma(x_i, x_j)$ together with the Gaussian density function $p(x) = (\tau \sqrt{2\pi})^{-n} \varphi_\tau(x, 0)$, because the convolution of two Gaussian densities is still a Gaussian density (see derivation in Appendix B):
$$\langle z_{x_i}, z_{x_j} \rangle = \varphi_\sigma(x_i, x_j) + (1 + 2\tau/\sigma)^{-n} \varphi_{\sigma + 2\tau}(x_i, x_j) - 2(1 + \tau/\sigma)^{-n} \varphi_{\sigma + \tau}(x_i, x_j).$$
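The closed form can be transcribed literally (a sketch of ours; `phi` plays the role of $\varphi_s$ with the same parameterization as above):

```python
import numpy as np

def phi(x, y, s):
    return np.exp(-np.sum((x - y)**2) / (2 * s**2))

def avg_rep_inner(xi, xj, sigma, tau):
    """<z_{x_i}, z_{x_j}> for the local-averaging functional (10) with a
    Gaussian kernel (bandwidth sigma) and a Gaussian density (bandwidth tau);
    a literal transcription of the closed form above."""
    n = xi.size
    return (phi(xi, xj, sigma)
            + (1 + 2 * tau / sigma) ** (-n) * phi(xi, xj, sigma + 2 * tau)
            - 2 * (1 + tau / sigma) ** (-n) * phi(xi, xj, sigma + tau))
```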
5 Optimization
The objective (2) can be optimized by many algorithms, such as stochastic gradient [30], bundle
method [19, 31], and (randomized) coordinate descent in its dual (4) [32]. Since all computational
strategies rely on kernel evaluation, we prefer the dual approach. In particular, (4) allows an unconstrained optimization, and the nonsmooth regions in $\ell_1$ or $\ell_2$ can be easily approximated by smooth surrogates [33]. So without loss of generality, we assume $\ell_1$ and $\ell_2$ are smooth. Our approach can work in both batch and coordinate-wise modes, depending on the scale of the problem.
In the batch setting, the major challenge is the cost of computing the gradient when the number of invariances is large. For example, consider all derivatives at all labeled examples $x_1, \ldots, x_l$ and let $N = (\langle z_{i,d}, z_{j,d'} \rangle)_{(i,d),(j,d')} \in \mathbb{R}^{nl \times nl}$ as in (8). Then given $\beta = (\beta_1', \ldots, \beta_l')' \in \mathbb{R}^{nl}$, the bottleneck of computation is $g := N\beta$, which costs $O(l^2 n^2)$ time and $O(l^2 n^2)$ space to store $N$ in a vanilla implementation. However, since the kernel matrices often employ rich structures, a careful treatment can reduce the cost by an order of magnitude, e.g. to $O(l^2 n)$ time and $O(nl + l^2)$ space in this case. Specifically, denote $K = (k(x_i, x_j))_{ij} \in \mathbb{R}^{l \times l}$, and three $n \times l$ matrices $X = (x_1, \ldots, x_l)$, $A = (\beta_1, \ldots, \beta_l)$, and $G = (g_{d,i})$. Then one can show
$$G = \sigma^{-4} \left( \sigma^2 A K + X Q - X \circ (\mathbf{1}_n \otimes \mathbf{1}_l' Q) \right), \quad \text{where } Q_{ij} = K_{ij} (\Delta_{ji} - \Delta_{ii}), \ \Delta = X'A. \qquad (11)$$
Here $\circ$ stands for the Hadamard product, and $\otimes$ is the Kronecker product; $\mathbf{1}_n \in \mathbb{R}^n$ is the vector of all ones. The computational cost is dominated by $X'A$, $AK$, and $XQ$, which are all $O(l^2 n)$.
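A direct transcription of (11) follows (our sketch; since the printed formula was reconstructed from a garbled source, it is worth validating against a naive computation of $N\beta$ before relying on it):

```python
import numpy as np

def grad_blocks(X, A, sigma):
    """Transcribe (11): G = sigma^-4 (sigma^2 A K + X Q - X o (1_n 1_l' Q)),
    with Q_ij = K_ij (D_ji - D_ii) and D = X'A. Costs O(l^2 n) rather than
    the O(l^2 n^2) of forming the nl x nl matrix N explicitly."""
    sq = np.sum(X**2, axis=0)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X.T @ X) / (2 * sigma**2))
    D = X.T @ A
    Q = K * (D.T - np.diag(D)[:, None])
    # X * Q.sum(axis=0) broadcasts the column sums of Q over rows,
    # i.e. the Hadamard product with 1_n (1_l' Q).
    return (sigma**2 * (A @ K) + X @ Q - X * Q.sum(axis=0)) / sigma**4

n, l = 4, 6
X, A = np.random.randn(n, l), np.random.randn(n, l)
G = grad_blocks(X, A, sigma=1.0)   # n x l block of gradient entries g_{d,i}
```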
When the number of invariances is huge, a batch solver can be slow and coordinate-wise updates can be more efficient. In each iteration, we pick a coordinate in $\alpha$ and optimize the objective over all the coordinates picked so far, leaving the remaining elements at zero. [32] selected the coordinate randomly, while another strategy is to choose the steepest descent coordinate. Clearly, the latter is useful only when this selection can be performed efficiently, which depends heavily on the structure of the problem.
6 Experimental Results
We compared our approach, which is henceforth referred to as InvSVM, with state-of-the-art methods
[Figure 1 appears here: four panels showing the two moon dataset and the learned decision boundary, with the weight on the invariance loss increasing from left to right.]
Figure 1: Decision boundary for the two moon dataset with $\gamma$ = 0, 0.01, 0.1, and 1 (left to right).
in (semi-)supervised learning using invariance to differentiation and transformation.
6.1 Invariance to Differentiation: Transduction on Two Moon Dataset
As a proof of concept, we experimented with the "two moon" dataset shown in Figure 1, with only $l = 2$ labeled data instances (red circle and black square). We used the Gaussian kernel with $\sigma = 0.25$ and the gradient invariances on both labeled and unlabeled data. The losses $\ell_1$ and $\ell_2$ are logistic and squared loss respectively, with $\lambda = 1$ and $\gamma \in \{0, 0.01, 0.1, 1\}$. (4) was minimized by an L-BFGS solver [34]. In Figure 1, from left to right our method lays more and more emphasis on placing the separation boundary in low density regions, which allows unlabeled data to improve the classification accuracy. We also tried hinge loss for $\ell_1$ and $\epsilon$-insensitive loss for $\ell_2$ with similar
classification results.
6.2 Invariance to Differentiation: Semi-supervised Learning on Real-world Data
Datasets. We used 9 datasets for binary classification from [4] and the UCI repository [35]. The
number of features (n) and instances (t) are given in Table 1. All feature vectors were normalized
to zero mean and unit length.
Algorithms. We trained InvSVM with hinge loss for $\ell_1$ and squared loss for $\ell_2$. The differentiation invariance was used over the whole dataset, i.e. $m = nt$ in (4), and the gradient computation was accelerated by (11). We compared InvSVM with the standard SVM, and a state-of-the-art semi-supervised learning algorithm LapSVM [36], which uses manifold regularization based on graph
Laplacian [11, 37]. All three algorithms used the Gaussian kernel with the bandwidth $\sigma$ set to the median of pairwise distances among all instances.
Settings. We trained SVM, LapSVM, and InvSVM on a subset of $l \in \{30, 60, 90\}$ labeled examples, and compared their test error on the other $t - l$ examples in each dataset. We used 5-fold stratified cross validation (CV) to select the values of $\lambda$ and $\gamma$ for InvSVM. CV was also applied to LapSVM for choosing the number of nearest neighbors for graph construction, the weight on the Laplacian regularizer, and the standard RKHS norm regularizer. For each fold of CV, InvSVM and LapSVM used the other 4 folds ($\frac{4}{5}l$ points) as labeled data, and the remaining $t - \frac{4}{5}l$ points as unlabeled data. The error on the held-out $\frac{1}{5}l$ points was then used for CV. Finally, the random selection of $l$ labeled examples was repeated 10 times, and we report the mean test error and standard deviation.
Results. It is clear from Table 1 that in most cases InvSVM achieves lower or similar test error compared to SVM and LapSVM. Both LapSVM and InvSVM are implementations of the low
density prior. To this end, LapSVM enforces smoothness of the discrimination function over neighboring instances, while InvSVM directly penalizes the gradient at instances and does not require a
notion of neighborhood. The lower error of InvSVM suggests the superiority of using gradients directly, which is enabled by our representer based approach. Besides, SVM often performs quite well when the number of labeled data is large, with error similar to LapSVM's. But still, InvSVM can attain
even lower error.
6.3 Transformation Invariance
Next we study the use of transformation invariance for supervised learning. We used the handwritten
digits from the MNIST dataset [38] and compared InvSVM with the virtual sample SVM (VirSVM)
which constructs additional instances by applying the following transformations to the training data:
2-pixel shifts in 4 directions, rotations by $\pm 10$ degrees, scaling by $\pm 0.1$ unit, and shearing along the vertical or horizontal axis by $\pm 0.1$ unit. For InvSVM, the derivative $\frac{\partial}{\partial \theta}\big|_{\theta=0} I(g, \theta; \mathcal{S})$ was approximated with the difference that results from the above transformations. We considered binary classification
problems by choosing four pairs of digits (4-vs-9, 2-vs-3, and 6-vs-5 are hard, while 7-vs-1 is
Table 1: Test error of SVM, LapSVM, and InvSVM for semi-supervised learning. The best result
(including tie) that is statistically significant in each setting is highlighted in bold. No number is
highlighted if there is no significant difference between the three methods.
Dataset:   heart (n=13, t=270)                BCI (n=117, t=400)                 bupa (n=6, t=245)
Method     l=30       l=60       l=90         l=30       l=60       l=90         l=30       l=60       l=90
SVM        23.5±2.08  21.4±0.23  20.0±1.11    44.6±1.89  39.0±3.41  34.6±3.25    36.2±5.53  37.9±5.80  36.9±1.80
LapSVM     23.2±1.68  22.7±1.92  20.7±2.10    44.8±2.72  45.4±2.25  36.1±3.92    36.6±5.98  40.7±4.05  37.1±1.58
InvSVM     22.3±1.27  20.2±1.01  19.6±2.79    45.3±3.59  38.4±4.41  32.7±2.27    38.2±4.46  35.4±1.85  35.2±0.45

Dataset:   g241c (n=241, t=1500)              g241n (n=241, t=1500)              Australian (n=14, t=690)
Method     l=30       l=60       l=90         l=30       l=60       l=90         l=30       l=60       l=90
SVM        32.9±1.59  27.1±0.92  24.8±1.99    34.6±2.52  28.3±2.65  25.6±1.48    20.6±4.18  21.7±11.3  15.3±0.86
LapSVM     37.1±1.25  28.4±2.44  29.7±1.57    37.7±5.76  29.9±2.41  25.9±1.49    21.9±9.27  15.6±2.49  14.7±0.16
InvSVM     33.1±0.49  26.4±1.14  23.4±1.36    35.4±1.57  28.4±3.03  22.3±4.69    17.7±1.74  16.4±1.11  15.6±0.77

Dataset:   ionosphere (n=34, t=351)           sonar (n=60, t=208)                USPS (n=241, t=1500)
Method     l=30       l=60       l=90         l=30       l=60       l=90         l=30       l=60       l=90
SVM        12.5±3.12  8.71±1.62  7.17±1.67    30.9±2.53  22.7±0.39  20.6±4.81    14.9±0.26  12.6±2.04  11.3±2.06
LapSVM     14.9±1.73  9.05±1.05  7.66±1.53    29.4±2.33  22.9±0.17  24.9±1.29    15.3±1.10  12.3±1.74  11.1±1.81
InvSVM     7.58±1.29  7.90±0.23  7.02±0.88    31.6±4.68  24.1±2.06  21.8±3.91    15.4±4.02  12.2±2.93  11.3±1.63
[Figure 2 appears here: four scatter panels, (a) 4 (pos) vs 9 (neg), (b) 2 (pos) vs 3 (neg), (c) 6 (pos) vs 5 (neg), and (d) 7 (pos) vs 1 (neg), each plotting the test error of InvSVM against the test error of VirSVM.]
Figure 2: Test error (in percentage) of InvSVM versus SVM with virtual sample.
easier). As the real distribution of digits is imbalanced and invariance is more useful when the number of labeled data is low, we randomly chose $n_+ = 50$ labeled images for one class and $n_- = 10$ images for the other. Accordingly, the supervised loss was normalized within each class: $n_+^{-1} \sum_{i : y_i = 1} \ell_1(f(x_i), 1) + n_-^{-1} \sum_{i : y_i = -1} \ell_1(f(x_i), -1)$. Logistic loss and $\epsilon$-insensitive loss were used for $\ell_1$ and $\ell_2$ respectively. All parameters were set by 5-fold CV and the test error was measured on the remaining images in the dataset. The whole process was repeated 20 times.
In Figure 2, InvSVM generally yields lower error than VirSVM, which suggests that, compared with drawing virtual samples from the invariance, it is more effective to directly enforce a flat gradient as in [3].
7 Conclusion and Discussion
We have shown how to model local invariances by using representers in RKHS. This subsumes a
wide range of invariances that are useful in practice, and the formulation can be optimized efficiently.
For future work, it will be interesting to extend the framework to a broader range of learning tasks.
A potential application is to the problem of imbalanced data learning, where one wishes to keep
the decision boundary further away from instances of the minority class and closer to the instances
of the majority class. It will also be interesting to generalize the framework to invariances that are not directly linear. For example, the total variation $\ell(f) := \int_a^b |f'(x)| \, dx$ ($a, b \in \mathbb{R}$) is not linear, but we can treat the integral of the absolute value as a loss function and take the derivative as the linear functional: $L_x(f) = f'(x)$ and $\ell(f) = \int_a^b |L_x(f)| \, dx$. As a result, the optimization may have to resort to sparse approximation methods. Finally, the norm of the linear functional does affect the optimization efficiency, and a detailed analysis will be useful for choosing the linear functionals.
References
[1] G. E. Hinton. Learning translation invariant recognition in massively parallel networks. In Proceedings Conference on Parallel Architectures and Languages Europe, pages 1–13. Springer, 1987.
[2] M. Ferraro and T. M. Caelli. Lie transformation groups, integral transforms, and invariant pattern recognition. Spatial Vision, 8:33–44, 1994.
[3] P. Simard, Y. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition: tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, pages 239–274, 1996.
[4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006.
[5] S. Ben-David, T. Lu, and D. Pal. Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In COLT, 2008.
[6] T. Zhang and F. J. Oles. A probability analysis on the value of unlabeled data for classification problems. In ICML, 2000.
[7] O. Bousquet, O. Chapelle, and M. Hein. Measure based regularization. In NIPS, 2003.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
[9] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML, 2001.
[10] M. Belkin, P. Niyogi, and V. Sindhwani. On manifold regularization. In AI-Stats, 2005.
[11] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
[12] D. Zhou and B. Schölkopf. Discrete regularization. In Semi-Supervised Learning, pages 221–232. MIT Press, 2006.
[13] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:3589–3646, 2009.
[14] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2008.
[15] C. Bhattacharyya, K. S. Pannagadatta, and A. J. Smola. A second order cone programming formulation for classifying missing data. In NIPS, 2005.
[16] A. Globerson and S. Roweis. Nightmare at test time: Robust learning by feature deletion. In ICML, 2006.
[17] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. Adversarial classification. In KDD, 2004.
[18] T. Graepel and R. Herbrich. Invariant pattern recognition by semidefinite programming machines. In NIPS, 2004.
[19] C. H. Teo, A. Globerson, S. Roweis, and A. Smola. Convex learning with invariances. In NIPS, 2007.
[20] D. DeCoste and B. Schölkopf. Training invariant support vector machines. Machine Learning, 46:161–190, 2002.
[21] O. Chapelle and B. Schölkopf. Incorporating invariances in nonlinear support vector machines. In NIPS, 2001.
[22] G. Wahba. An introduction to model building with reproducing kernel Hilbert spaces. Technical Report TR 1020, University of Wisconsin-Madison, 2000.
[23] A. J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, 22:211–231, 1998.
[24] C. Walder and O. Chapelle. Learning with transformation invariant kernels. In NIPS, 2007.
[25] C. Burges. Geometry and invariance in kernel based methods. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 89–116. MIT Press, 1999.
[26] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2001.
[27] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, Cambridge, UK, 2000.
[28] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer, 2008.
[29] E. Kreyszig. Introductory Functional Analysis with Applications. Wiley, 1989.
[30] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, 2007.
[31] C. H. Teo, S. V. N. Vishwanathan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11:311–365, January 2010.
[32] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[33] O. Chapelle. Training a support vector machine in the primal. Neural Comput., 19(5):1155–1178, 2007.
[34] http://www.cs.ubc.ca/~pcarbo/lbfgsb-for-matlab.html.
[35] K. Bache and M. Lichman. UCI machine learning repository, 2013. University of California, Irvine.
[36] http://www.dii.unisi.it/~melacci/lapsvmp.
[37] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In ICML, 2005.
[38] http://www.cs.nyu.edu/~roweis/data.html.
9
4,304 | 4,896 | Learning Kernels Using
Local Rademacher Complexity
Corinna Cortes
Google Research
76 Ninth Avenue
New York, NY 10011
[email protected]
Marius Kloft∗
Courant Institute &
Sloan-Kettering Institute
251 Mercer Street
New York, NY 10012
[email protected]
Mehryar Mohri
Courant Institute &
Google Research
251 Mercer Street
New York, NY 10012
[email protected]
Abstract
We use the notion of local Rademacher complexity to design new algorithms for
learning kernels. Our algorithms thereby benefit from the sharper learning bounds
based on that notion which, under certain general conditions, guarantee a faster
convergence rate. We devise two new learning kernel algorithms: one based on
a convex optimization problem for which we give an efficient solution using existing learning kernel techniques, and another one that can be formulated as a
DC-programming problem for which we describe a solution in detail. We also report the results of experiments with both algorithms in both binary and multi-class
classification tasks.
1 Introduction
Kernel-based algorithms are widely used in machine learning and have been shown to often provide
very effective solutions. For such algorithms, the features are provided intrinsically via the choice of
a positive-semi-definite symmetric kernel function, which can be interpreted as a similarity measure
in a high-dimensional Hilbert space. In the standard setting of these algorithms, the choice of the
kernel is left to the user. That choice is critical since a poor choice, as with a sub-optimal choice of
features, can make learning very challenging. In the last decade or so, a number of algorithms and
theoretical results have been given for a wider setting known as that of learning kernels or multiple
kernel learning (MKL) (e.g., [1, 2, 3, 4, 5, 6]). That setting, instead of demanding that the user
take the risk of specifying a particular kernel function, only requires him to provide a family
of kernels. Both tasks of selecting the kernel out of that family of kernels and choosing a hypothesis
based on that kernel are then left to the learning algorithm.
One of the most useful data-dependent complexity measures used in the theoretical analysis and
design of learning kernel algorithms is the notion of Rademacher complexity (e.g., [7, 8]). Tight
learning bounds based on this notion were given in [2], improving earlier results of [4, 9, 10].
These generalization bounds provide a strong theoretical foundation for a family of learning kernel
algorithms based on a non-negative linear combination of base kernels. Most of these algorithms,
whether for binary classification or multi-class classification, are based on controlling the trace of
the combined kernel matrix.
This paper seeks to use a finer notion of complexity for the design of algorithms for learning kernels: the notion of local Rademacher complexity [11, 12]. One shortcoming of the general notion
of Rademacher complexity is that it does not take into consideration the fact that, typically, the
hypotheses selected by a learning algorithm have a better performance than in the worst case and
belong to a more favorable sub-family of the set of all hypotheses. The notion of local Rademacher
complexity is precisely based on this idea by considering Rademacher averages of smaller subsets
of the hypothesis set. It leads to sharper learning bounds which, under certain general conditions,
guarantee a faster convergence rate.
∗ Alternative address: Memorial Sloan-Kettering Cancer Center, 415 E 68th Street, New York, NY 10065, USA. Email: [email protected].
We show how the notion of local Rademacher complexity can be used to guide the design of new
algorithms for learning kernels. For kernel-based hypotheses, the local Rademacher complexity can
be both upper- and lower-bounded in terms of the tail sum of the eigenvalues of the kernel matrix
[13]. This motivates the introduction of two natural families of hypotheses based on non-negative
combinations of base kernels with kernels constrained by a tail sum of the eigenvalues. We study
and compare both families of hypotheses and derive learning kernel algorithms based on both. For
the first family of hypotheses, the algorithm is based on a convex optimization problem. We show
how that problem can be solved using optimization solutions for existing learning kernel algorithms.
For the second hypothesis set, we show that the problem can be formulated as a DC-programming
(difference of convex functions programming) problem and describe in detail our solution. We report
empirical results for both algorithms in both binary and multi-class classification tasks.
The paper is organized as follows. In Section 2, we present some background on the notion of local
Rademacher complexity by summarizing the main results relevant to our theoretical analysis and the
design of our algorithms. Section 3 describes and analyzes two new kernel learning algorithms, as
just discussed. In Section 4, we give strong theoretical guarantees in support of both algorithms. In
Section 5, we report the results of preliminary experiments, in a series of both binary classification
and multi-class classification tasks.
2 Background on local Rademacher complexity
In this section, we present an introduction to local Rademacher complexities and related properties.
2.1 Core ideas and definitions
We consider the standard set-up of supervised learning where the learner receives a sample $z_1 = (x_1, y_1), \dots, z_n = (x_n, y_n)$ of size $n \ge 1$ drawn i.i.d. from a probability distribution $P$ over $Z = X \times Y$. Let $F$ be a set of functions mapping from $X$ to $Y$, and let $l : Y \times Y \to [0, 1]$ be a loss function. The learning problem is that of selecting a function $f \in F$ with small risk or expected loss $E[l(f(x), y)]$. Let $G := l(F, \cdot)$ denote the loss class; then this is equivalent to finding a function $g \in G$ with small average $E[g]$. For convenience, in what follows, we assume that the infimum of $E[g]$ over $G$ is reached and denote by $g^* \in \operatorname{argmin}_{g \in G} E[g]$ the most accurate predictor in $G$. When the infimum is not reached, in the following results, $E[g^*]$ can be equivalently replaced by $\inf_{g \in G} E[g]$.
Definition 1. Let $\sigma_1, \dots, \sigma_n$ be an i.i.d. family of Rademacher variables taking values $-1$ and $+1$ with equal probability, independent of the sample $(z_1, \dots, z_n)$. Then, the global Rademacher complexity of $G$ is defined as
$$R_n(G) := E\Big[\sup_{g \in G} \frac{1}{n}\sum_{i=1}^n \sigma_i\, g(z_i)\Big].$$
Generalization bounds based on the notion of Rademacher complexity are standard [7]. In particular, for the empirical risk minimization (ERM) hypothesis $\hat g_n$, for any $\delta > 0$, the following bound holds with probability at least $1 - \delta$:
$$E[\hat g_n] - E[g^*] \le 2\sup_{g \in G}\big|E[g] - \hat E[g]\big| \le 4R_n(G) + \sqrt{\frac{2\log\frac{2}{\delta}}{n}}. \qquad (1)$$
$R_n(G)$ is of the order $O(1/\sqrt{n})$ for various classes used in practice, including when $F$ is a kernel class with bounded trace and when the loss $l$ is Lipschitz. In such cases, the bound (1) converges at rate $O(1/\sqrt{n})$. For some classes $G$, we may, however, obtain fast rates of up to $O(1/n)$. The following presentation is based on [12]. Using Talagrand's inequality, one can show that with probability at least $1 - \delta$,
$$E[\hat g_n] - E[g^*] \le 8R_n(G) + \sigma(G)\sqrt{\frac{8\log\frac{2}{\delta}}{n}} + \frac{3\log\frac{2}{\delta}}{n}. \qquad (2)$$
Here, $\sigma^2(G) := \sup_{g \in G} E[g^2]$ is a bound on the variance of the functions in $G$. The key idea to obtain fast rates is to choose a much smaller class $G_n^* \subseteq G$ with as small a variance as possible, while requiring that $\hat g_n$ still lies in $G_n^*$. Since such a small class can also have a substantially smaller Rademacher complexity $R_n(G_n^*)$, the bound (2) can be sharper than (1).
But how can we find a small class $G_n^*$ that is just large enough to contain $\hat g_n$? We give some further background on how to construct such a class in the supplementary material, Section 1. It turns out
Figure 1: Illustration of the bound (3). The volume of the gray shaded area amounts to the term $\theta r + \sum_{j>\theta}\lambda_j$ occurring in (3). The left- and right-most figures show the cases of $\theta$ too small or too large, and the center figure the case corresponding to the appropriate value of $\theta$.
that the order of convergence of $E[\hat g_n] - E[g^*]$ is determined by the order of the fixed point of the local Rademacher complexity, defined below.
Definition 2. For any $r > 0$, the local Rademacher complexity of $G$ is defined as
$$R_n(G; r) := R_n\big(\{g \in G : E[g^2] \le r\}\big).$$
If the local Rademacher complexity is known, it can be used to compare $\hat g_n$ with $g^*$, as $E[\hat g_n] - E[g^*]$ can be bounded in terms of the fixed point of the Rademacher complexity of $F$, besides constants and $O(1/n)$ terms. But, while the global Rademacher complexity is generally of the order of $O(1/\sqrt{n})$ at best, its local counterpart can converge at orders up to $O(1/n)$. We give an example of such a class, particularly relevant for this paper, below.
2.2 Kernel classes
The local Rademacher complexity for kernel classes can be accurately described and shown to admit
a simple expression in terms of the eigenvalues of the kernel [13] (cf. also Theorem 6.5 in [11]).
Theorem 3. Let $k$ be a Mercer kernel with corresponding feature map $\phi_k$ and reproducing kernel Hilbert space $H_k$. Let $k(x, \tilde x) = \sum_{j=1}^{\infty} \lambda_j \varphi_j(x)^\top \varphi_j(\tilde x)$ be its eigenvalue decomposition, where $(\lambda_i)_{i=1}^{\infty}$ is the sequence of eigenvalues arranged in descending order. Let $F := \{f_w = (x \mapsto \langle w, \phi_k(x)\rangle) : \|w\|_{H_k} \le 1\}$. Then, for every $r > 0$,
$$E[R(F; r)] \le \sqrt{\frac{2}{n}\min_{\theta \ge 0}\Big(\theta r + \sum_{j>\theta}\lambda_j\Big)} = \sqrt{\frac{2}{n}\sum_{j=1}^{\infty}\min(r, \lambda_j)}. \qquad (3)$$
Moreover, there is an absolute constant $c$ such that, if $\lambda_1 \ge \frac{1}{n}$, then for every $r \ge \frac{1}{n}$,
$$c\,\sqrt{\frac{1}{n}\sum_{j=1}^{\infty}\min(r, \lambda_j)} \le E[R(F; r)].$$
We summarize the proof of this result in the supplementary material, Section 2. In view of (3), the local Rademacher complexity for kernel classes is determined by the tail sum of the eigenvalues. A core idea of the proof is to optimize over the "cut-off point" $\theta$ of the tail sum of the eigenvalues in the bound. Solving for the optimal $\theta$ gives a bound in terms of truncated eigenvalues, which is illustrated in Figure 1.
Consider, for instance, the special case where $r = 1$. We can then recover the familiar upper bound on the Rademacher complexity: $R_n(F) \le \sqrt{\mathrm{Tr}(k)/n}$. But, when $\sum_{j>\theta}\lambda_j = O(\exp(-\theta))$, as in the case of Gaussian kernels [14], then
$$\min_{\theta \ge 0}\big(\theta r + \exp(-\theta)\big) = O\big(r\log(1/r)\big).$$
Therefore, we have $R(F; r) = O\big(\sqrt{\tfrac{r}{n}\log(1/r)}\big)$, which has the fixed point $r^* = O\big(\tfrac{\log(n)}{n}\big)$. Thus, by Theorem 8 (shown in the supplemental material), we have $E[\hat g_n] - E[g^*] = O\big(\tfrac{\log(n)}{n}\big)$, which yields a much stronger learning guarantee.
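As a quick worked check of this rate (an elementary computation added here, not in the original text): setting the derivative of $\theta r + \exp(-\theta)$ to zero gives $r - \exp(-\theta) = 0$, i.e., $\theta^* = \log(1/r)$, so
$$\min_{\theta \ge 0}\big(\theta r + \exp(-\theta)\big) = r\log(1/r) + r = O\big(r\log(1/r)\big), \qquad 0 < r < 1.$$
Substituting into (3) yields $R(F; r) = O\big(\sqrt{\tfrac{r}{n}\log(1/r)}\big)$, and solving the fixed-point condition $r = R(F; r)$ up to constants gives $r^* = O(\log(n)/n)$, as stated.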
3 Algorithms
In this section, we will use the properties of the local Rademacher complexity just discussed to
devise a novel family of algorithms for learning kernels.
3.1 Motivation and analysis
Most learning kernel algorithms are based on a family of hypotheses defined by a kernel $k_\mu = \sum_{m=1}^M \mu_m k_m$ that is a non-negative linear combination of $M$ base kernels. This is described by the following hypothesis class:
$$H := \big\{f_{w,k_\mu} = x \mapsto \langle w, \phi_{k_\mu}(x)\rangle : \|w\|_{H_{k_\mu}} \le \Lambda,\ \mu \ge 0\big\}.$$
It is known that the Rademacher complexity of $H$ can be upper-bounded in terms of the trace of the combined kernel. Thus, most existing algorithms for learning kernels [1, 4, 6] add the following constraint to restrict $H$:
$$\mathrm{Tr}(k_\mu) \le 1. \qquad (4)$$
As we saw in the previous section, however, the tail sum of the eigenvalues of the kernel, rather than its trace, determines the local Rademacher complexity. Since the local Rademacher complexity can lead to tighter generalization bounds than the global Rademacher complexity, this motivates us to consider the following hypothesis class for learning kernels:
$$H_1 := \Big\{f_{w,k_\mu} \in H : \sum_{j>\theta}\lambda_j(k_\mu) \le 1\Big\}.$$
Here, $\theta$ is a free parameter controlling the tail sum. The trace is a linear function and thus the constraint (4) defines a half-space, therefore a convex set, in the space of kernels. The function $k \mapsto \sum_{j>\theta}\lambda_j(k)$, however, is concave since it can be expressed as the difference of the trace and the sum of the $\theta$ largest eigenvalues, which is a convex function.
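The convexity claim can be spelled out with one classical identity (a standard fact added for completeness): by Ky Fan's variational characterization,
$$\sum_{j \le \theta} \lambda_j(K) = \max_{U \in \mathbb{R}^{n \times \theta},\, U^\top U = I_\theta} \mathrm{Tr}\big(U^\top K U\big),$$
a pointwise maximum of functions linear in $K$, hence convex; since the trace is linear, the tail sum $\sum_{j>\theta}\lambda_j(K) = \mathrm{Tr}(K) - \sum_{j \le \theta}\lambda_j(K)$ is indeed concave.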
Nevertheless, the following upper bound holds, denoting $\bar\mu_m := \mu_m/\|\mu\|_1$:
$$\sum_{m=1}^M \mu_m \sum_{j>\theta}\lambda_j(k_m) = \sum_{m=1}^M \bar\mu_m \sum_{j>\theta}\lambda_j\big(\|\mu\|_1 k_m\big) \le \sum_{j>\theta}\lambda_j\Big(\underbrace{\sum_{m=1}^M \bar\mu_m \|\mu\|_1 k_m}_{=\,k_\mu}\Big), \qquad (5)$$
where the equality holds by linearity and the inequality by the concavity just discussed. This leads
us to consider alternatively the following class
$$H_2 := \Big\{f_{w,k_\mu} \in H : \sum_{m=1}^M \mu_m \sum_{j>\theta}\lambda_j(k_m) \le 1\Big\}.$$
The class H2 is convex because it is the restriction of the convex class H via a linear inequality
constraint. H2 is thus more convenient to work with. The following proposition helps us compare
these two families.
Proposition 4. The following statements hold for the sets $H_1$ and $H_2$:
(a) $H_1 \subseteq H_2$.
(b) If $\theta = 0$, then $H_1 = H_2$.
(c) Let $\theta > 0$. There exist kernels $k_1, \dots, k_M$ and a probability measure $P$ such that $H_1 \subsetneq H_2$.
The proposition shows that, in general, the convex class H2 can be larger than H1 . The following
result shows that in general an even stronger result holds.
Proposition 5. Let $\theta > 0$. There exist kernels $k_1, \dots, k_M$ and a probability measure $P$ such that $\mathrm{conv}(H_1) \subsetneq H_2$.
The proofs of these propositions are given in the supplemental material. These results show that in
general H2 could be a richer class than H1 and even its convex hull. This would suggest working
with H1 to further limit the risk of overfitting, however, as already pointed out, H2 is more convenient since it is a convex class. Thus, in the next section, we will consider both hypothesis sets and
introduce two distinct learning kernel algorithms, each based on one of these families.
3.2 Convex optimization algorithm
The simpler algorithm performs regularized empirical risk minimization based on the convex class $H_2$. Note that by a renormalization of the kernels $k_1, \dots, k_M$, according to $\tilde k_m := \big(\sum_{j>\theta}\lambda_j(k_m)\big)^{-1} k_m$ and $\tilde k_\mu = \sum_{m=1}^M \mu_m \tilde k_m$, we can simply rewrite $H_2$ as
$$H_2 = \tilde H_2 := \Big\{f_{w,\tilde k_\mu} = \big(x \mapsto \langle w, \phi_{\tilde k_\mu}(x)\rangle\big) : \|w\|_{H_{\tilde k_\mu}} \le \Lambda,\ \mu \ge 0,\ \|\mu\|_1 \le 1\Big\}, \qquad (6)$$
which is the commonly studied hypothesis class in multiple kernel learning. Of course, in practice, we replace the empirical version of the kernel $k$ by the kernel matrix $K = (k(x_i, x_j))_{i,j=1}^n$, and consider $\lambda_1, \dots, \lambda_n$ as the eigenvalues of the kernel matrix and not of the kernel itself. Hence, we can easily exploit existing software solutions:
1. For all $m = 1, \dots, M$, compute $\sum_{j>\theta}\lambda_j(K_m)$;
2. For all $m = 1, \dots, M$, normalize the kernel matrices according to $\tilde K_m := \big(\sum_{j>\theta}\lambda_j(K_m)\big)^{-1} K_m$;
3. Use any of the many existing ($\ell_1$-norm) MKL solvers to compute the minimizer of ERM over $\tilde H_2$.
Note that the tail sum can be computed in $O(n^2\theta)$ for each kernel because it is sufficient to compute the $\theta$ largest eigenvalues and the trace: $\sum_{j>\theta}\lambda_j(K_m) = \mathrm{Tr}(K_m) - \sum_{j=1}^{\theta}\lambda_j(K_m)$.
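To make steps 1 and 2 concrete, here is a minimal NumPy sketch of the tail-sum computation and the kernel renormalization; it is an illustration under our own naming, not code from the paper, and it computes the full spectrum for simplicity (an iterative top-$\theta$ eigensolver would give the $O(n^2\theta)$ cost noted above).

```python
import numpy as np

def tail_sum(K, theta):
    # Tail sum of the eigenvalues of the kernel matrix K past the cut-off
    # theta, computed as Tr(K) minus the sum of the theta largest eigenvalues.
    theta = min(theta, K.shape[0])
    if theta == 0:
        return np.trace(K)
    top = np.linalg.eigvalsh(K)[-theta:]   # eigvalsh sorts ascending
    return np.trace(K) - top.sum()

def normalize_kernels(kernels, theta):
    # Step 2: rescale each base kernel matrix by the inverse of its tail sum.
    return [K / tail_sum(K, theta) for K in kernels]
```

The renormalized matrices can then be passed to any off-the-shelf $\ell_1$-norm MKL solver, exactly as in step 3.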
3.3 DC-programming
In the more challenging case, we perform penalized ERM over the class $H_1$, that is, we aim to solve
$$\min_{\mu \ge 0,\, w}\ \frac{1}{2}\|w\|^2_{H_{K_\mu}} + C\sum_{i=1}^n l\big(y_i f_{w,K_\mu}(x_i)\big) \quad \text{s.t.} \quad \sum_{j>\theta}\lambda_j(K_\mu) \le 1. \qquad (7)$$
This is a convex optimization problem with an additional concave constraint $\sum_{j>\theta}\lambda_j(K_\mu) \le 1$. This constraint is not differentiable, but it admits a subdifferential at any point $\mu_0 \in \mathbb{R}^M$. Denote the subdifferential of the function $\mu \mapsto \lambda_j(K_\mu)$ by $\partial_{\mu_0}\lambda_j(K_{\mu_0}) := \{v \in \mathbb{R}^M : \lambda_j(K_\mu) - \lambda_j(K_{\mu_0}) \le \langle v, \mu - \mu_0\rangle,\ \forall \mu \in \mathbb{R}^M\}$. Moreover, let $u_1, \dots, u_n$ be the eigenvectors of $K_{\mu_0}$ sorted in descending order. Defining $v_m := \sum_{j>\theta} u_j^\top K_m u_j$, one can verify, using the sub-differentiability of the max operator, that $v = (v_1, \dots, v_M)^\top$ is contained in the subdifferential $\partial_{\mu_0}\sum_{j>\theta}\lambda_j(K_{\mu_0})$. Thus, we can linearly approximate the constraint, for any $\mu_0 \in \mathbb{R}^M$, via
$$\sum_{j>\theta}\lambda_j(K_\mu) \approx \langle v, \mu - \mu_0\rangle = \sum_{j>\theta} u_j^\top K_{\mu-\mu_0}\, u_j.$$
We can thus tackle problem (7) using the DCA algorithm [15], which in this context reduces to alternating between the linearization of the concave constraint and solving the resulting convex problem, that is, for any $\mu_0 \in \mathbb{R}^M$,
$$\min_{\mu \ge 0,\, w}\ \frac{1}{2}\|w\|^2_{H_{K_\mu}} + C\sum_{i=1}^n l\big(f_{w,K_\mu}(x_i), y_i\big) \quad \text{s.t.} \quad \sum_{j>\theta} u_j^\top K_{\mu-\mu_0}\, u_j \le 1. \qquad (8)$$
Note that $\mu_0$ changes in every iteration, and so may the eigenvectors $u_1, \dots, u_n$ of $K_{\mu_0}$, until the DCA algorithm converges. The DCA algorithm is proven to converge to a local minimum, even when the concave function is not differentiable [15]. The algorithm is also close to the CCCP algorithm of Yuille and Rangarajan [16], modulo the use of subgradients instead of gradients.
To solve (8), we alternate the optimization with respect to $\mu$ and $w$. Note that, for fixed $w$, we can compute the optimal $\mu$ analytically. Up to normalization, the following holds:
$$\forall m = 1, \dots, M:\quad \mu_m = \sqrt{\frac{\|w\|^2_{H_{K_m}}}{\sum_{j>\theta} u_j^\top K_m u_j}}. \qquad (9)$$
A very similar optimality expression has been used in the context of the group Lasso and $\ell_p$-norm multiple kernel learning by [3]. In turn, we need to compute a $w$ that is optimal in (8), for fixed $\mu$. We perform this computation in the dual; e.g., for the hinge loss $l(t, y) = \max(0, 1 - ty)$, this reduces to a standard support vector machine (SVM) [17, 18] problem,
$$\max_{0 \le \alpha \le C}\ \mathbf{1}^\top \alpha - \frac{1}{2}(\alpha \circ y)^\top K_\mu\, (\alpha \circ y), \qquad (10)$$
where $\circ$ denotes the Hadamard product.
Algorithm 1 (DC algorithm for learning kernels based on the local Rademacher complexity).
1: input: kernel matrix $K = (k(x_i, x_j))_{i,j=1}^n$ and labels $y_1, \dots, y_n \in \{-1, 1\}$, optimization precision $\varepsilon$
2: initialize $\mu_m := 1/M$ for all $m = 1, \dots, M$
3: while optimality conditions are not satisfied within tolerance $\varepsilon$ do
4:   SVM training: compute a new $\alpha$ by solving the SVM problem (10)
5:   eigenvalue computation: compute eigenvectors $u_1, \dots, u_n$ of $K_\mu$
6:   store $\mu_0 := \mu$
7:   $\mu$ update: compute a new $\mu$ according to (9) using (11)
8:   normalize $\mu$ such that $\sum_{j>\theta} u_j^\top K_{\mu-\mu_0}\, u_j = 1$
9: end while
10: SVM training: solve (10) with respect to $\alpha$
11: output: $\varepsilon$-accurate $\alpha$ and kernel weights $\mu$
For the computation of (9), we can recover the term $\|w\|^2_{H_{K_m}}$ corresponding to the $\alpha$ that is optimal in (10) via
$$\|w\|^2_{H_{K_m}} = \mu_m^2\, (\alpha \circ y)^\top K_m\, (\alpha \circ y), \qquad (11)$$
which follows from the KKT conditions with respect to (10). In summary, the proposed algorithm, which is shown in Algorithm Table 1, alternately optimizes $\alpha$ and $\mu$, where prior to each $\mu$ step the linear approximation is updated by computing an eigenvalue decomposition of $K_\mu$.
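The alternating scheme can be summarized in a short sketch. The snippet below implements one $\mu$-step of Algorithm 1 in NumPy under stated simplifications: `alpha` is assumed to come from an external SVM solver for (10), and the final rescaling uses the homogeneous form $\sum_{j>\theta} u_j^\top K_\mu u_j = 1$ of the normalization in step 8; the function and variable names are ours, not the authors'.

```python
import numpy as np

def mu_update(kernels, mu, alpha, y, theta):
    # One mu-step of Algorithm 1: linearize the tail-sum constraint at the
    # current mu via the eigenvectors of K_mu, then apply update (9),
    # recovering ||w||^2 per kernel through (11).
    K_mu = sum(m * K for m, K in zip(mu, kernels))
    _, vecs = np.linalg.eigh(K_mu)      # columns sorted by ascending eigenvalue
    U = vecs[:, ::-1][:, theta:]        # eigenvectors u_j with index j > theta
    ay = alpha * y
    mu_new = np.zeros_like(mu)
    for m, K in enumerate(kernels):
        w_norm_sq = mu[m] ** 2 * (ay @ K @ ay)             # (11)
        lin = np.sum(U * (K @ U))                          # sum_{j>theta} u_j^T K u_j
        mu_new[m] = np.sqrt(w_norm_sq / max(lin, 1e-12))   # (9)
    K_new = sum(m * K for m, K in zip(mu_new, kernels))
    return mu_new / np.sum(U * (K_new @ U))                # rescale the constraint
```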
In the discussion that precedes, for the sake of simplicity of the presentation, we restricted ourselves to the case of an $\ell_1$-regularization; that is, we showed how the standard trace-regularization can be replaced by a regularization based on the tail sum of the eigenvalues. It should be clear that in the same way we can replace the familiar $\ell_p$-regularization used in learning kernel algorithms [3] for $p \ge 1$ with $\ell_p$-regularization in terms of the tail eigenvalues. In fact, as in the $\ell_1$ case, in the $\ell_p$ case, our convex optimization algorithm can be solved using existing MKL optimization solutions. The results we report in Section 5 will in fact also include those obtained by using the $\ell_2$ version of our algorithm.
4 Learning guarantees
An advantage of the algorithms presented is that they benefit from strong theoretical guarantees. Since $H_1 \subseteq H_2$, it is sufficient to present these guarantees for $H_2$: any bound that holds for $H_2$ a fortiori holds for $H_1$. To present the result, recall from Section 3.2 that, by a renormalization of the kernels, we may equivalently express $H_2$ by $\tilde H_2$, as defined in (6). Thus, the algorithms presented enjoy the following bound on the local Rademacher complexity, which was shown in [19] (Theorem 5). Similar results were shown in [20, 21].
Theorem 6 (Local Rademacher complexity). Assume that the kernels are uniformly bounded (for all $m$, $\|\tilde k_m\|_\infty < \infty$) and uncorrelated. Then, the local Rademacher complexity of $\tilde H_2$ can be bounded as follows:
$$R(\tilde H_2; r) \le \sqrt{\frac{16e}{n}\,\max_{m=1,\dots,M}\ \sum_{j=1}^{\infty}\min\!\Big(r,\ e\Lambda^2\log^2(M)\,\lambda_j(\tilde k_m)\Big)}\ +\ O\!\Big(\frac{1}{n}\Big).$$
Note that we show the result under the assumption of uncorrelated kernels only for simplicity of presentation. More generally, a similar result holds for correlated kernels and arbitrary $p \ge 1$ (cf. [19], Theorem 5). Subsequently, we can derive the following bound on the excess risk from Theorem 6 using a result of [11] (presented as Theorem 8 in the supplemental material, Section 1).
Theorem 7. Let $l(t, y) = \frac{1}{2}(t - y)^2$ be the squared loss. Assume that for all $m$, there exists $d$ such that $\lambda_j(\tilde k_m) \le d\, j^{-\beta}$ for some $\beta > 1$ (this is a common assumption and, for example, met for finite-rank kernels and Gaussian kernels [14]). Then, under the assumptions of the previous theorem, for any $\delta > 0$, with probability at least $1 - \delta$ over the draw of the sample, the excess loss of the class $\tilde H_2$ can be bounded as follows:
$$E[\hat g_n] - E[g^*] \le 186\,\Big(4 d\,\Lambda^2 e^3 \log^2(M)\Big)^{\frac{1}{1+\beta}}\Big(\frac{M}{e}\Big)^{\frac{1}{1+\beta}}\, n^{-\frac{\beta}{1+\beta}} + O\!\Big(\frac{\log\frac{2}{\delta}}{n}\Big).$$
[Figure 2: three panels; raw plot data omitted. Left: AUC (roughly 0.84 to 0.92) versus $n \in \{100, 250, 1000\}$ for l1, l2, unif, conv, dc. Center: kernel weights at $n = 100$ for TSS, Promo, 1st Ex, Angle, Energ, with single-kernel AUCs 85.2, 80.9, 85.8, 55.6, 72.1. Right: $\log$ of the tail sum $\sum_{j>\theta}\lambda_j$ versus the cut-off $\theta \in [0, 100]$ for each kernel.]
Figure 2: Results of the TSS experiment. Left: average AUCs of the compared algorithms. Center: for each kernel, the average kernel weight and single-kernel AUC. Right: for each kernel $K_m$, the tail sum $\sum_{j>\theta}\lambda_j$ as a function of the eigenvalue cut-off point $\theta$.
We observe that the above bound converges in $O\big(\log^{\frac{2}{1+\beta}}(M)\, M^{\frac{1}{1+\beta}}\, n^{-\frac{\beta}{1+\beta}}\big)$. This can be almost as slow as $O\big(\sqrt{\log(M)/n}\big)$ (when $\beta \approx 1$) and almost as fast as $O(M/n)$ (when letting $\beta \to \infty$).
The latter is the case, for instance, for finite-rank or Gaussian kernels.
5 Experiments
In this section, we report the results of experiments with the two algorithms we introduced, which we will denote by conv and dc for short. We compare our algorithms with the classical $\ell_1$-norm MKL (denoted by l1) and the more recent $\ell_2$-norm MKL [3] (denoted by l2). We also measure the performance of the uniform kernel combination, denoted by unif, which has frequently been shown to achieve competitive performance [22]. In all experiments, we use the hinge loss as a loss function, including a bias term.
5.1 Transcription Start Site Detection
Our first experiment aims at detecting transcription start sites (TSS) of RNA Polymerase II binding
genes in genomic DNA sequences. We experiment on the TSS data set, which we downloaded
from http://mldata.org/. This data set, which is a subset of the data used in the larger study
of [23], comes with 5 kernels, capturing various complementary aspects: a weighted-degree kernel
representing the TSS signal TSS, two spectrum kernels around the promoter region (Promo) and
the 1st exon (1st Ex), respectively, and two linear kernels based on twisting angles (Angle) and
stacking energies (Energ), respectively. The SVM based on the uniform combination of these 5
kernels was found to have the highest overall performance among 19 promoter prediction programs
[24], it therefore constitutes a strong baseline. To be consistent with previous studies [24, 3, 23], we
will use the area under the ROC curve (AUC) as an evaluation criterion.
All kernel matrices $K_m$ were normalized such that $\mathrm{Tr}(K_m) = n$ for all $m$, prior to the experiment. SVM computations were performed using the SHOGUN toolbox [25]. For both conv and dc, we experiment with $\ell_1$- and $\ell_2$-norms. We randomly drew an $n$-element training set and split the remaining set into validation and test sets of equal size. The random partitioning was repeated 100 times. We selected the optimal model parameters $\theta \in \{2^i,\ i = 0, 1, \dots, 4\}$ and $C \in \{10^i,\ i = -2, -1, 0, 1, 2\}$ on the validation set, based on their maximal mean AUC, and report mean AUCs on the test set as well as standard deviations (the latter are within the interval $[1.1, 2.5]$ and are shown in detail in the supplemental material, Section 4). The experiment was carried out for all $n \in \{100, 250, 1000\}$. Figure 2 (left) shows the mean AUCs on the test sets.
We observe that unif and l2 outperform l1, except when n = 100, in which case the three methods are on par. This is consistent with the result reported by [3]. For all sample sizes investigated,
conv and dc yield the highest AUCs.
We give a brief explanation for the outcome of the experiment. To further investigate, we compare
the average kernel weights $\mu$ output by the compared algorithms (for $n = 100$). They are shown
in Figure 2 (center), where we report, below each kernel, also its performance in terms of its AUC
when training an SVM on that single kernel alone. We observe that l1 focuses on the TSS kernel
using the TSS signal, which has the second highest AUC among the kernels (85.2). However, l1
discards the 1st exon kernel, which also has a high predictive performance (AUC of 85.8). A similar
order of kernel importance is determined by l2, but which distributes the weights more broadly,
Table 1: The training split (sp) fraction, dataset size (n), and multi-class accuracies shown with ±1 standard error. The performance results for MKL and conv correspond to the best values obtained using either ℓ1-norm or ℓ2-norm regularization.

            sp    n     unif          MKL           conv          θ
plant       0.5   940   91.1 ± 0.8    90.6 ± 0.9    91.4 ± 0.7    32
nonpl       0.5   2732  87.2 ± 1.6    87.7 ± 1.3    87.6 ± 0.9    4
psortPos    0.8   541   90.5 ± 3.1    90.6 ± 3.4    90.8 ± 2.8    1
psortNeg    0.5   1444  90.3 ± 1.8    90.7 ± 1.2    91.2 ± 1.3    8
protein     0.5   694   57.2 ± 2.0    57.2 ± 2.0    59.6 ± 2.4    8
while still mostly focusing on the TSS kernel. In contrast, conv and dc distribute their weight only over the TSS, Promoter, and 1st Exon kernels, which are also the kernels with the highest predictive accuracies. The considerably weaker kernels Angle and Energ are discarded.
But why are Angle and Energ discarded? This can be explained by means of Figure 2 (right), where we show the tail sum of each kernel as a function of the cut-off point $\theta$. We observe that Angle and Energ have only moderately large first and second eigenvalues, which is why they hardly profit when using conv or dc. The Promo and Exon kernels, however, which are discarded by l1, have large first (and also second) eigenvalues, which is why they are promoted by conv or dc. Indeed, the model selection determines the optimal cut-off, for both conv and dc, at $\theta = 1$.
5.2 Multi-class Experiments
We next carried out a series of experiments with the conv algorithm in the multi-class classification setting, which has repeatedly been demonstrated to be amenable to MKL [26, 27]. As described in Section 3.2, the conv problem can be solved by simply re-normalizing the kernels by the tail sum of the eigenvalues and making use of any $\ell_p$-norm MKL solver. For our experiments, we used the ufo algorithm [26] from the DOGMA toolbox http://dogma.sourceforge.net/. For both conv and ufo we experiment with both $\ell_1$ and $\ell_2$ regularization and report the best performance achieved in each case.
We used the data sets evaluated in [27] (plant, nonpl, psortPos, and psortNeg), which consist of
either 3 or 4 classes and use 69 biologically motivated sequence kernels.1 Furthermore, we also
considered the proteinFold data set of [28], which consists of 27 classes and uses 12 biologically
motivated base kernels.2
The results are summarized in Table 1: they represent mean accuracy values with one standard
deviation as computed over 10 random splits of the data into training and test folds. The fraction of
the data used for training, as well as the total number of examples, is also shown. The optimal value for the parameter $\theta \in \{2^i,\ i = 0, 1, \dots, 8\}$ was determined by cross-validation. For the parameters of the ufo algorithm, including $C$, we followed the methodology of [26]. For plant, psortPos, and
psortNeg, the results show that conv leads to a consistent improvement in a difficult multi-class
setting, although we cannot attest to their significance due to the insufficient size of the data sets.
They also demonstrate a significant performance improvement over l1 and unif in the proteinFold
data set, a more difficult task where the classification accuracies are below 60%.
6 Conclusion
We showed how the notion of local Rademacher complexity can be used to derive new algorithms for
learning kernels by using a regularization based on the tail sum of the eigenvalues of the kernels. We
introduced two natural hypothesis sets based on that regularization, discussed their relationships, and
showed how they can be used to design an algorithm based on a convex optimization and one based
on solving a DC-programming problem. Our algorithms benefit from strong learning guarantees.
Our empirical results show that they can lead to performance improvement in some challenging
tasks. Finally, our analysis based on local Rademacher complexity could be used as the basis for the
design of new learning kernel algorithms.
Acknowledgments
We thank Gunnar Rätsch for helpful discussions. This work was partly funded by the NSF award
IIS-1117591 and a postdoctoral fellowship funded by the German Research Foundation (DFG).
¹ Accessible from http://raetschlab.org//projects/protsubloc.
² Accessible from http://mkl.ucsd.edu/dataset/protein-fold-prediction.
References
[1] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan, "Multiple kernel learning, conic duality, and the SMO algorithm," in Proc. 21st ICML, ACM, 2004.
[2] C. Cortes, M. Mohri, and A. Rostamizadeh, "Generalization bounds for learning kernels," in Proceedings, 27th ICML, pp. 247-254, 2010.
[3] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien, "ℓp-norm multiple kernel learning," Journal of Machine Learning Research, vol. 12, pp. 953-997, Mar 2011.
[4] G. Lanckriet, N. Cristianini, L. El Ghaoui, P. Bartlett, and M. I. Jordan, "Learning the kernel matrix with semi-definite programming," JMLR, vol. 5, pp. 27-72, 2004.
[5] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet, "SimpleMKL," J. Mach. Learn. Res., vol. 9, pp. 2491-2521, 2008.
[6] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf, "Large scale multiple kernel learning," Journal of Machine Learning Research, vol. 7, pp. 1531-1565, July 2006.
[7] P. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," Journal of Machine Learning Research, vol. 3, pp. 463-482, Nov. 2002.
[8] V. Koltchinskii and D. Panchenko, "Empirical margin distributions and bounding the generalization error of combined classifiers," Annals of Statistics, vol. 30, pp. 1-50, 2002.
[9] N. Srebro and S. Ben-David, "Learning bounds for support vector machines with learned kernels," in Proc. 19th COLT, pp. 169-183, 2006.
[10] Y. Ying and C. Campbell, "Generalization bounds for learning the kernel problem," in COLT, 2009.
[11] P. L. Bartlett, O. Bousquet, and S. Mendelson, "Local Rademacher complexities," Ann. Stat., vol. 33, no. 4, pp. 1497-1537, 2005.
[12] V. Koltchinskii, "Local Rademacher complexities and oracle inequalities in risk minimization," Annals of Statistics, vol. 34, no. 6, pp. 2593-2656, 2006.
[13] S. Mendelson, "On the performance of kernel classes," J. Mach. Learn. Res., vol. 4, pp. 759-771, December 2003.
[14] B. Schölkopf and A. Smola, Learning with Kernels. Cambridge, MA: MIT Press, 2002.
[15] P. D. Tao and L. T. H. An, "A DC optimization algorithm for solving the trust-region subproblem," SIAM Journal on Optimization, vol. 8, no. 2, pp. 476-505, 1998.
[16] A. L. Yuille and A. Rangarajan, "The concave-convex procedure," Neural Computation, vol. 15, pp. 915-936, Apr. 2003.
[17] C. Cortes and V. Vapnik, "Support vector networks," Machine Learning, vol. 20, pp. 273-297, 1995.
[18] B. Boser, I. Guyon, and V. Vapnik, "A training algorithm for optimal margin classifiers," in Proc. 5th Annual ACM Workshop on Computational Learning Theory (D. Haussler, ed.), pp. 144-152, 1992.
[19] M. Kloft and G. Blanchard, "On the convergence rate of ℓp-norm multiple kernel learning," Journal of Machine Learning Research, vol. 13, pp. 2465-2502, Aug 2012.
[20] V. Koltchinskii and M. Yuan, "Sparsity in multiple kernel learning," Ann. Stat., vol. 38, no. 6, pp. 3660-3695, 2010.
[21] T. Suzuki, "Unifying framework for fast learning rate of non-sparse multiple kernel learning," in Advances in Neural Information Processing Systems 24, pp. 1575-1583, 2011.
[22] P. Gehler and S. Nowozin, "On feature combination for multiclass object classification," in International Conference on Computer Vision, pp. 221-228, 2009.
[23] S. Sonnenburg, A. Zien, and G. Rätsch, "Arts: Accurate recognition of transcription starts in human," Bioinformatics, vol. 22, no. 14, pp. e472-e480, 2006.
[24] T. Abeel, Y. V. de Peer, and Y. Saeys, "Towards a gold standard for promoter prediction evaluation," Bioinformatics, 2009.
[25] S. Sonnenburg, G. Rätsch, S. Henschel, C. Widmer, J. Behr, A. Zien, F. de Bona, A. Binder, C. Gehl, and V. Franc, "The SHOGUN Machine Learning Toolbox," J. Mach. Learn. Res., 2010.
[26] F. Orabona and L. Jie, "Ultra-fast optimization algorithm for sparse multi kernel learning," in Proceedings of the 28th International Conference on Machine Learning, 2011.
[27] A. Zien and C. S. Ong, "Multiclass multiple kernel learning," in ICML 24, pp. 1191-1198, ACM, 2007.
[28] T. Damoulas and M. A. Girolami, "Probabilistic multi-class multi-kernel learning: on protein fold recognition and remote homology detection," Bioinformatics, vol. 24, no. 10, pp. 1264-1270, 2008.
[29] P. Bartlett and S. Mendelson, "Empirical minimization," Probab. Theory Related Fields, vol. 135(3), pp. 311-334, 2006.
[30] A. B. Tsybakov, "Optimal aggregation of classifiers in statistical learning," Ann. Stat., vol. 32, pp. 135-166, 2004.
4,305 | 4,897 | Inverse Density as an Inverse Problem:
the Fredholm Equation Approach
Qichao Que, Mikhail Belkin
Department of Computer Science and Engineering
The Ohio State University
{que,mbelkin}@cse.ohio-state.edu
Abstract
We address the problem of estimating the ratio $q/p$, where $p$ is a density function and $q$ is another density or, more generally, an arbitrary function. Knowing or approximating this ratio is needed in various problems of inference and integration, often referred to as importance sampling in statistical inference. It is also closely related to the problem of covariate shift in transfer learning. Our approach is based on reformulating the problem of estimating the ratio as an inverse problem in terms of an integral operator corresponding to a kernel, known as the Fredholm problem of the first kind. This formulation, combined with the techniques of regularization, leads to a principled framework for constructing algorithms and for analyzing them theoretically. The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized Estimator) is flexible, simple and easy to implement. We provide detailed theoretical analysis including concentration bounds and convergence rates for the Gaussian kernel for densities defined on $\mathbb{R}^d$ and smooth
d-dimensional sub-manifolds of the Euclidean space.
Model selection for unsupervised or semi-supervised inference is generally a difficult problem. It turns out that in the density ratio estimation setting, when samples
from both distributions are available, simple completely unsupervised model selection methods are available. We call this mechanism CD-CV for Cross-Density
Cross-Validation. We show encouraging experimental results including applications to classification within the covariate shift framework.
1 Introduction
In this paper we address the problem of estimating the ratio of two functions, $q(x)/p(x)$, where $p$ is given by a sample and $q(x)$ is either a known function or another probability density function given by a sample. This estimation problem arises naturally when one attempts to integrate a function with respect to one density, given its values on a sample obtained from another distribution. Recently there
have been a significant amount of work on estimating the density ratio (also known as the importance function) from sampled data, e.g., [6, 10, 9, 22, 2]. Many of these papers consider this problem
in the context of covariate shift assumption [19] or the so-called selection bias [27]. The approach
taken in our paper is based on reformulating the density ratio estimation as an integral equation,
known as the Fredholm equation of the first kind, and solving it using the tools of regularization
and Reproducing Kernel Hilbert Spaces. That allows us to develop simple and flexible algorithms
for density ratio estimation within the popular kernel learning framework. The connection to the
classical operator theory setting makes it easier to apply the standard tools of spectral and Fourier
analysis to obtain theoretical results.
We start with the following simple equality underlying the importance sampling method:
$$E_q(h(x)) = \int h(x)\,q(x)\,dx = \int h(x)\,\frac{q(x)}{p(x)}\,p(x)\,dx = E_p\!\left[h(x)\frac{q(x)}{p(x)}\right] \qquad (1)$$
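As a toy numerical check of identity (1) (our own illustration, not from the paper): the snippet below estimates $E_q[h]$ using only a sample from $p$, reweighted by the true ratio $q/p$, for two unit-variance Gaussians where everything is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
h = lambda x: x ** 2
pdf = lambda x, m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

x = rng.normal(0.0, 1.0, size=200_000)   # sample from p = N(0, 1)
w = pdf(x, 1.0) / pdf(x, 0.0)            # the ratio q(x)/p(x) for q = N(1, 1)

print(np.mean(h(x) * w))                 # close to E_q[h] = 1^2 + 1 = 2
```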
By replacing the function $h(x)$ with a kernel $k(x, y)$, we obtain
$$\mathcal{K}_p \frac{q}{p}(x) := \int k(x, y)\,\frac{q(y)}{p(y)}\,p(y)\,dy = \int k(x, y)\,q(y)\,dy =: \mathcal{K}_q 1(x). \qquad (2)$$
Thinking of the function $q(x)/p(x)$ as an unknown quantity, and assuming that the right-hand side is known, this becomes a Fredholm integral equation. Note that the right-hand side can be estimated given a sample from $q$, while the operator on the left can be estimated using a sample from $p$.
To push this idea further, suppose $k_t(x, y)$ is a "local" kernel (e.g., the Gaussian, $k_t(x, y) = \frac{1}{(2\pi t)^{d/2}}\, e^{-\frac{\|x-y\|^2}{2t}}$) such that $\int_{\mathbb{R}^d} k_t(x, y)\,dx = 1$. When we use $\delta$-kernels, like the Gaussian, and $f$ satisfies some smoothness conditions, we have $\int_{\mathbb{R}^d} k_t(x, y) f(x)\,dx = f(y) + O(t)$ (see [24], Ch. 1). Thus we get another (approximate) integral equality:
$$\mathcal{K}_{t,p}\,\frac{q}{p}(y) := \int_{\mathbb{R}^d} k_t(x, y)\,\frac{q(x)}{p(x)}\,p(x)\,dx \approx q(y). \qquad (3)$$
It becomes an integral equation for $\frac{q(x)}{p(x)}$, assuming that $q$ is known or can be approximated.
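For the Gaussian kernel, the $O(t)$ statement admits a standard sharper form, which we include here as a worked expansion (it is not spelled out in the text; see [24] for the general theory): a second-order Taylor expansion of $f$ around $y$, together with the fact that $k_t(\cdot, y)$ is the density of $N(y, tI)$, gives
$$\int_{\mathbb{R}^d} k_t(x, y)\, f(x)\,dx = f(y) + \frac{t}{2}\,\Delta f(y) + O(t^2),$$
since the first-order term vanishes by symmetry and each second-order diagonal term contributes $\frac{t}{2}\,\partial^2_{x_i} f(y)$.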
We address these inverse problems by formulating them within the classical framework of Tikhonov-Phillips regularization, with the penalty term corresponding to the norm of the function in the Reproducing Kernel Hilbert Space $\mathcal{H}$ with kernel $k_{\mathcal{H}}$ used in many machine learning algorithms:
$$\text{[Type I]: } \frac{q}{p} \approx \operatorname*{arg\,min}_{f \in \mathcal{H}} \|\mathcal{K}_p f - \mathcal{K}_q 1\|^2_{L_{2,p}} + \lambda \|f\|^2_{\mathcal{H}} \qquad \text{[Type II]: } \frac{q}{p} \approx \operatorname*{arg\,min}_{f \in \mathcal{H}} \|\mathcal{K}_{t,p} f - q\|^2_{L_{2,p}} + \lambda \|f\|^2_{\mathcal{H}}$$
Importantly, given a sample $x_1, \dots, x_n$ from $p$, the integral operator $\mathcal{K}_p$ applied to a function $f$ can be approximated by the corresponding discrete sum $\mathcal{K}_p f(x) \approx \frac{1}{n}\sum_i f(x_i)\,k(x_i, x)$, while the $L_{2,p}$ norm is approximated by an average: $\|f\|^2_{L_{2,p}} \approx \frac{1}{n}\sum_i f(x_i)^2$. Of course, the same holds for a sample from $q$. We see that the Type I formulation is useful when $q$ is a density and samples from both $p$ and $q$ are available, while Type II is useful when the values of $q$ (which does not have to be a density function at all¹) are known at the data points sampled from $p$.
Since all of these involve only function evaluations at the sample points, an application of the usual representer theorem for Reproducing Kernel Hilbert Spaces leads to simple, explicit and easily implementable algorithms, representing the solution of the optimization problem as a linear combination of kernels over the points of the sample, $\sum_i \alpha_i k_{\mathcal{H}}(x_i, x)$ (see Section 2). We call the resulting algorithms FIRE, for Fredholm Inverse Regularized Estimator.
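To illustrate how direct the resulting estimator is, here is a minimal sketch of the Type I algorithm under one natural discretization; the variable names and the plain normal-equations solve are our illustrative choices (the paper's Section 2 derives the exact expressions), so treat this as a sketch rather than the authors' implementation.

```python
import numpy as np

def fire_type1(Kp, KH, Kpq, lam):
    # Type I FIRE on a sample x_1..x_n from p and x'_1..x'_m from q.
    #   Kp  : (n, n) matrix k(x_i, x_j) for the Fredholm kernel k
    #   KH  : (n, n) matrix k_H(x_i, x_j) for the RKHS kernel k_H
    #   Kpq : (n, m) matrix k(x_i, x'_s)
    # Returns alpha with f = sum_i alpha_i k_H(x_i, .).
    n = Kp.shape[0]
    A = (Kp @ KH) / n        # (K_p f)(x_j) ~ (1/n) sum_i k(x_j, x_i) f(x_i)
    h = Kpq.mean(axis=1)     # (K_q 1)(x_j) ~ (1/m) sum_s k(x_j, x'_s)
    # Minimize (1/n)||A a - h||^2 + lam a^T KH a  =>  normal equations:
    return np.linalg.solve(A.T @ A / n + lam * KH, A.T @ h / n)
```

The Type II variant is analogous: replace `h` with the known values $q(x_j)$ at the $p$-sample and use the $\delta$-kernel $k_t$ in place of $k$.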
Remark: Other norms and loss functions. Norms and loss functions other than $L_{2,p}$ can also be used in our setting as long as they can be approximated from a sample using function evaluations.
1. Perhaps the most interesting is the $L_{2,q}$ norm, available in the Type I setting, when a sample from the probability distribution $q$ is available. In fact, given a sample from both $p$ and $q$ we can use the combined empirical norm $\alpha\|\cdot\|_{L_{2,p}} + (1-\alpha)\|\cdot\|_{L_{2,q}}$. Optimization using those norms leads to some interesting kernel algorithms described in Section 2. We note that the solution is still a linear combination of kernel functions centered on the sample from $p$ and can still be written explicitly.
2. In the Type I formulation, if the kernels $k(x, y)$ and $k_{\mathcal{H}}(x, y)$ coincide, it is possible to use the RKHS norm $\|\cdot\|_{\mathcal{H}}$ instead of $L_{2,p}$. This formulation (see Section 2) also yields an explicit formula and is related to Kernel Mean Matching [9], although with a different optimization procedure.
Since we are dealing with a classical inverse problem for integral operators, our formulation allows for theoretical analysis using the methods of spectral theory. In Section 3 we present concentration and error bounds as well as convergence rates for our algorithms when data are sampled from a distribution defined on $\mathbb{R}^d$, a domain in $\mathbb{R}^d$ with boundary, or a compact d-dimensional sub-manifold of a Euclidean space $\mathbb{R}^N$, for the case of the Gaussian kernel.
In Section 4 we introduce an unsupervised method, referred to as CD-CV (for cross-density cross-validation), for model selection, and discuss the experimental results on several data sets comparing our method FIRE with the available alternatives, Kernel Mean Matching (KMM) [9] and LSIF [10], as well as the baseline thresholded inverse kernel density estimator^2 (TIKDE) and importance sampling (when available).
^1 This could be useful in sampling procedures, when the normalizing coefficients are hard to estimate.
^2 The standard kernel density estimator for q divided by a thresholded kernel density estimator for p.
We summarize the contributions of the paper as follows:
1. We provide a formulation of estimating the density ratio (importance function) as a classical inverse problem, known as the Fredholm equation, establishing connections to the methods of classical analysis. The underlying idea is to "linearize" the properties of the density by studying an associated integral operator.
2. To solve the resulting inverse problems we apply regularization with an RKHS norm penalty. This
provides a flexible and principled framework, with a variety of different norms and regularization
techniques available. It separates the underlying inverse problem from the necessary regularization
and leads to a family of very simple and direct algorithms within the kernel learning framework in
machine learning.
3. Using the techniques of spectral analysis and concentration, we provide a detailed theoretical analysis for the case of the Gaussian kernel, for the Euclidean case as well as for distributions supported on a sub-manifold. We prove error bounds as well as convergence rates.
4. We also propose a completely unsupervised technique, CD-CV, for cross-validating the parameters of our algorithm and demonstrate its usefulness, thus addressing in our setting one of the most thorny issues in unsupervised/semi-supervised learning. We evaluate and compare our methods on several different data sets and in various settings, and demonstrate strong performance and better computational efficiency compared to the alternatives.
Related work. Recently the problem of density ratio estimation has received significant attention due in part to the increased interest in transfer learning [15] and, in particular, to the form of transfer learning known as covariate shift [19]. To give a brief summary, given the feature space X and the label space Y, two probability distributions p and q on X × Y satisfy the covariate shift assumption if for all x, y, p(y|x) = q(y|x). It is easy to see that training a classifier to minimize the error for q, given a sample from p, requires estimating the ratio of the marginal distributions $\frac{q_X(x)}{p_X(x)}$. The work on covariate shift, density ratio estimation and related settings includes [27, 2, 6, 10, 22, 9, 23, 14, 7].
The algorithm most closely related to ours is Kernel Mean Matching [9]. It is based on the equation $E_q(\Phi(x)) = E_p\big(\frac{q}{p}\Phi(x)\big)$, where $\Phi$ is the feature map corresponding to an RKHS $\mathcal{H}$. It is rewritten as an optimization problem
$$\frac{q(x)}{p(x)} \approx \arg\min_{\beta\in L_2,\ \beta(x)>0,\ E_p(\beta)=1} \big\|E_q(\Phi(x)) - E_p(\beta(x)\Phi(x))\big\|_{\mathcal{H}}.$$
The quantity on the right can be estimated given a sample from p and a sample from q, and the minimization becomes a quadratic optimization problem over the values of $\beta$ at the points sampled from p. Writing down the feature map explicitly, i.e., recalling that $\Phi(x) = k_{\mathcal{H}}(x,\cdot)$, we see that the equality $E_q(\Phi(x)) = E_p(\frac{q}{p}\Phi(x))$ is equivalent to the integral equation Eq. 2 considered as an identity in the Hilbert space $\mathcal{H}$. Thus the problem of KMM can be viewed within our setting Type I (see Remark 2 in the introduction), with an RKHS norm but a different optimization algorithm.
However, while the KMM optimization problem uses the RKHS norm, the weight function $\beta$ itself is not in the RKHS. Thus, unlike most other algorithms in the RKHS framework (in particular, FIRE), the empirical optimization problem does not have a natural out-of-sample extension. Also, since there is no regularizing term, the problem is less stable (see Section 4 for some experimental comparisons) and the theoretical analysis is harder (however, see [6] and the recent paper [26] for some nice theoretical analysis of KMM in certain settings).
Another related recent algorithm is Least-Squares Importance Fitting (LSIF) [10], which attempts to estimate the density ratio by choosing a parametric linear family of functions and choosing a function from this family to minimize the $L_{2,p}$ distance to the density ratio. A similar setting with the Kullback-Leibler distance (KLIEP) was proposed in [23]. This has the advantage of a natural out-of-sample extension property. We note that our method for unsupervised parameter selection in Section 4 is related to their ideas. However, in our case the set of test functions does not need to form a good basis since no approximation is required.
We note that our methods are closely related to a large body of work on kernel methods in machine
learning and statistical estimation (e.g., [21, 17, 16]). Many of these algorithms can be interpreted
as inverse problems, e.g., [3, 20] in the Tikhonov regularization or other regularization frameworks.
In particular, we note interesting methods for density estimation proposed in [12] and estimating the
support of density through spectral regularization in [4], as well as robust density estimation using
RKHS formulations [11] and conditional density [8]. We also note the connections of the methods
in this paper to properties of density-dependent operators in classification and clustering [25, 18, 1].
Among those works that provide theoretical analysis of algorithms for estimating density ratios, [14] establishes minimax rates for likelihood ratio estimation. Another recent theoretical analysis of KMM in [26] contains bounds for the output of the corresponding integral operators.
2 Settings and Algorithms
Settings and objects. We start by introducing objects and function spaces important for our development. As usual, the norm in the space of square-integrable functions with respect to a measure $\rho$ is defined as follows: $L_{2,\rho} = \{f : \int_\Omega |f(x)|^2 d\rho < \infty\}$. This is a Hilbert space with the inner product defined in the usual way by $\langle f,g\rangle_{2,\rho} = \int_\Omega f(x)g(x)\,d\rho$. Given a kernel k(x, y) we define the operator $\mathcal{K}_\rho$: $\mathcal{K}_\rho f(y) := \int_\Omega k(x,y)f(x)\,d\rho(x)$. We will use the notation $\mathcal{K}_{t,\rho}$ to explicitly refer to the parameter of the kernel function $k_t(x,y)$ when it is a $\delta$-family. If the function k(x, y) is symmetric and positive definite, then there is a corresponding Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$. We recall the key property of the kernel $k_{\mathcal{H}}$: for any $f\in\mathcal{H}$, $\langle f, k_{\mathcal{H}}(x,\cdot)\rangle_{\mathcal{H}} = f(x)$. The Representer Theorem allows us to write solutions to various optimization problems over $\mathcal{H}$ in terms of linear combinations of kernels supported on sample points (see [21] for an in-depth discussion of the RKHS theory and the issues related to learning). Given a sample $x_1,\dots,x_n$ from p, one can approximate the $L_{2,p}$ norm of a sufficiently smooth function f by $\|f\|^2_{2,p} \approx \frac{1}{n}\sum_i |f(x_i)|^2$, and similarly the integral operator $\mathcal{K}_p f(x) \approx \frac{1}{n}\sum_i k(x_i,x)f(x_i)$. These approximate equalities can be made precise by using appropriate concentration inequalities.
The FIRE Algorithms. As discussed in the introduction, the starting point for our development is the two integral equalities:
$$\text{[I]: } \mathcal{K}_p\frac{q}{p}(\cdot) = \int k(\cdot,y)\frac{q(y)}{p(y)}\,dp(y) = \mathcal{K}_q 1(\cdot) \qquad \text{[II]: } \mathcal{K}_{t,p}\frac{q}{p}(\cdot) = \int k_t(\cdot,y)\frac{q(y)}{p(y)}\,dp(y) = q(\cdot) + o(1) \qquad (4)$$
Notice that in the Type I setting, the kernel does not have to be in a $\delta$-family. For example, a linear kernel is admissible. The Type II setting comes from the fact that $\mathcal{K}_{t,p}f(x) \approx f(x)p(x) + O(t)$ for a "$\delta$-function-like" kernel, and we keep t in the notation in that case. Assuming that either $\mathcal{K}_q 1$ or q is (approximately) known (Type I and II settings, respectively), the equalities in Eq. 4 become integral equations for q/p, known as Fredholm equations of the first kind. To estimate q/p, we need to obtain an approximation to the solution which (a) can be obtained computationally from sampled data, (b) is stable with respect to sampling and other perturbations of the input function, and (c) can be analyzed using the standard machinery of functional analysis.
To provide a framework for solving these inverse problems, we apply the classical techniques of regularization combined with the RKHS norm popular in machine learning. In particular, a simple formulation of Type I using Tikhonov regularization ([5], Ch. 5) with the $L_{2,p}$ norm is as follows:
$$\text{[Type I]:}\quad f^I_\lambda = \arg\min_{f\in\mathcal{H}} \|\mathcal{K}_p f - \mathcal{K}_q 1\|^2_{2,p} + \lambda\|f\|^2_{\mathcal{H}} \qquad (5)$$
Here $\mathcal{H}$ is an appropriate Reproducing Kernel Hilbert Space. Similarly, Type II can be solved by
$$\text{[Type II]:}\quad f^{II}_\lambda = \arg\min_{f\in\mathcal{H}} \|\mathcal{K}_{t,p} f - q\|^2_{2,p} + \lambda\|f\|^2_{\mathcal{H}} \qquad (6)$$
We will now discuss the empirical versions of these equations and the resulting algorithms.
Type I setting. Algorithm for the $L_{2,p}$ norm. Given an iid sample from p, $z_p = \{x_i\}_{i=1}^n$, and an iid sample from q, $z_q = \{x'_j\}_{j=1}^m$ (z for the combined sample), we can approximate the integral operators $\mathcal{K}_p$ and $\mathcal{K}_q$ by $K_{z_p}f(x) = \frac{1}{n}\sum_{x_i\in z_p} k(x_i,x)f(x_i)$ and $K_{z_q}f(x) = \frac{1}{m}\sum_{x'_i\in z_q} k(x'_i,x)f(x'_i)$. Thus the empirical version of Eq. 5 becomes
$$f^I_{\lambda,z} = \arg\min_{f\in\mathcal{H}} \frac{1}{n}\sum_{x_i\in z_p}\big((K_{z_p}f)(x_i) - (K_{z_q}1)(x_i)\big)^2 + \lambda\|f\|^2_{\mathcal{H}} \qquad (7)$$
The first term of the optimization problem involves only evaluations of the function f at the points of the sample. From the Representer Theorem and matrix manipulation, we obtain the following:
$$f^I_{\lambda,z}(x) = \sum_{x_i\in z_p} k_{\mathcal{H}}(x_i,x)\,v_i \quad\text{and}\quad v = \big(K_{p,p}^2 K_{\mathcal{H}} + n\lambda I\big)^{-1} K_{p,p}K_{p,q}\mathbf{1}_{z_q}, \qquad (8)$$
where the kernel matrices are defined as follows: $(K_{p,p})_{ij} = \frac{1}{n}k(x_i,x_j)$, $(K_{\mathcal{H}})_{ij} = k_{\mathcal{H}}(x_i,x_j)$ for $x_i,x_j\in z_p$, and $K_{p,q}$ is defined as $(K_{p,q})_{ij} = \frac{1}{m}k(x_i,x'_j)$ for $x_i\in z_p$ and $x'_j\in z_q$.
If $K_{\mathcal{H}}$ and $K_{p,p}$ come from the same kernel we simply have: $v = \frac{1}{n}\big(K_{p,p}^3 + \lambda I\big)^{-1}K_{p,p}K_{p,q}\mathbf{1}_{z_q}$.
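The closed form of Eq. 8 reduces to one linear solve. Below is a short Python sketch of the Type I estimator (our illustrative implementation, not the authors' code; the Gaussian kernels, bandwidths and regularization value are assumptions):

```python
import numpy as np

def fire_type1(xp, xq, t, tH, lam):
    """Type I FIRE (Eq. 8) for 1-D data with Gaussian kernels k and k_H."""
    n, m = len(xp), len(xq)
    k = lambda X, Y, s: (np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * s))
                         / np.sqrt(2 * np.pi * s))
    Kpp = k(xp, xp, t) / n               # (K_pp)_ij = k(x_i, x_j) / n
    KH = k(xp, xp, tH)                   # RKHS kernel matrix on the p-sample
    Kpq = k(xp, xq, t) / m               # (K_pq)_ij = k(x_i, x'_j) / m
    # v = (K_pp^2 K_H + n lam I)^{-1} K_pp K_pq 1_{z_q}
    v = np.linalg.solve(Kpp @ Kpp @ KH + n * lam * np.eye(n),
                        Kpp @ Kpq.sum(axis=1))
    # out-of-sample evaluation: f(x) = sum_i k_H(x_i, x) v_i
    return lambda x: k(np.atleast_1d(x), xp, tH) @ v

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 400)           # sample from p = N(0, 1)
xq = rng.normal(0.5, 1.0, 400)           # sample from q = N(0.5, 1)
ratio = fire_type1(xp, xq, t=0.5, tH=0.5, lam=1e-5)
print(ratio(np.array([-1.0, 0.0, 1.0])))  # estimates of q/p at a few points
```

Note that a single n x n linear system suffices, which is the practical appeal of the explicit formula.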
Algorithms for the $\alpha L_{2,p} + (1-\alpha)L_{2,q}$ norm. Depending on the setting, we may want to minimize the error of the estimate over the probability distribution p, q, or over some linear combination of these. A significant potential benefit of using a linear combination is that both samples can be used at the same time in the loss function. First we state the continuous version of the problem:
$$f^*_\lambda = \arg\min_{f\in\mathcal{H}} \alpha\|\mathcal{K}_p f - \mathcal{K}_q 1\|^2_{2,p} + (1-\alpha)\|\mathcal{K}_p f - \mathcal{K}_q 1\|^2_{2,q} + \lambda\|f\|^2_{\mathcal{H}} \qquad (9)$$
Given a sample from p, $z_p = \{x_1,\dots,x_n\}$, and a sample from q, $z_q = \{x'_1,\dots,x'_m\}$, we obtain an empirical version of Eq. 9:
$$f^\alpha_{\lambda,z} = \arg\min_{f\in\mathcal{H}} \frac{\alpha}{n}\sum_{x_i\in z_p}\big(K_{z_p}f(x_i) - K_{z_q}1(x_i)\big)^2 + \frac{1-\alpha}{m}\sum_{x'_i\in z_q}\big((K_{z_p}f)(x'_i) - (K_{z_q}1)(x'_i)\big)^2 + \lambda\|f\|^2_{\mathcal{H}}$$
From the Representer Theorem, $f^\alpha_{\lambda,z}(x) = \sum_{x_i\in z_p} v_i k_{\mathcal{H}}(x_i,x)$ with $v = (K + n\lambda I)^{-1}K_1\mathbf{1}_{z_q}$, where
$$K = \Big(\frac{\alpha}{n}(K_{p,p})^2 + \frac{1-\alpha}{m}K_{q,p}^T K_{q,p}\Big)K_{\mathcal{H}} \quad\text{and}\quad K_1 = \frac{\alpha}{n}K_{p,p}K_{p,q} + \frac{1-\alpha}{m}K_{q,p}^T K_{q,q},$$
where $(K_{p,p})_{ij} = \frac{1}{n}k(x_i,x_j)$, $(K_{\mathcal{H}})_{ij} = k_{\mathcal{H}}(x_i,x_j)$ for $x_i,x_j\in z_p$, and $(K_{p,q})_{ij} = \frac{1}{m}k(x_i,x'_j)$ and $(K_{q,p})_{ji} = \frac{1}{n}k(x'_j,x_i)$ for $x_i\in z_p$, $x'_j\in z_q$. Despite the loss function combining both samples, the solution is still a summation of kernels over the points in the sample from p.
Algorithms for the RKHS norm. In addition to using the RKHS norm for regularization, we can also use it as a loss function: $f^*_\lambda = \arg\min_{f\in\mathcal{H}} \|\mathcal{K}_p f - \mathcal{K}_q 1\|^2_{\mathcal{H}_0} + \lambda\|f\|^2_{\mathcal{H}}$. Here the Hilbert space $\mathcal{H}_0$ must correspond to the kernel k and can potentially be different from the space $\mathcal{H}$ used for regularization. Note that this formulation is only applicable in the Type I setting since it requires the function q to belong to the RKHS $\mathcal{H}_0$. Given two samples $z_p$, $z_q$, it is easy to write down the empirical version of this problem, leading to the following formula:
$$f^*_{\lambda,z}(x) = \sum_{x_i\in z_p} v_i k_{\mathcal{H}}(x_i,x) \qquad v = \big(K_{p,p}K_{\mathcal{H}} + n\lambda I\big)^{-1}K_{p,q}\mathbf{1}_{z_q}. \qquad (10)$$
The result is somewhat similar to our Type I formulation with the $L_{2,p}$ norm. We note the connection between this formulation using the RKHS norm as a loss function and the KMM algorithm [9]. When the kernels k and $k_{\mathcal{H}}$ are the same, Eq. 10 can be viewed as a regularized version of KMM (with a different optimization procedure).
Type II setting. In the Type II setting we assume that we have a sample $z = \{x_i\}_{i=1}^n$ drawn from p and that we know the function values $q(x_i)$ at the points of the sample. Replacing the norm and the integral operator with their empirical versions, we obtain the following optimization problem:
$$f^{II}_{\lambda,z} = \arg\min_{f\in\mathcal{H}} \frac{1}{n}\sum_{x_i\in z}\big(K_{t,z_p}f(x_i) - q(x_i)\big)^2 + \lambda\|f\|^2_{\mathcal{H}} \qquad (11)$$
As before, using the Representer Theorem we obtain an analytical formula for the solution:
$$f^{II}_{\lambda,z}(x) = \sum_{x_i\in z} k_{\mathcal{H}}(x_i,x)\,v_i \quad\text{where}\quad v = \big(K^2 K_{\mathcal{H}} + n\lambda I\big)^{-1}K\mathbf{q},$$
where the kernel matrix K is defined by $K_{ij} = \frac{1}{n}k_t(x_i,x_j)$, $(K_{\mathcal{H}})_{ij} = k_{\mathcal{H}}(x_i,x_j)$ and $q_i = q(x_i)$.
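A corresponding sketch for the Type II solution (again our illustration; the choice of q and the bandwidths are assumptions) differs only in the right-hand side:

```python
import numpy as np

def fire_type2(xp, q_vals, t, tH, lam):
    """Type II FIRE (Eq. 11): q is known at the points of the p-sample."""
    n = len(xp)
    k = lambda X, Y, s: (np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * s))
                         / np.sqrt(2 * np.pi * s))
    K = k(xp, xp, t) / n                 # K_ij = k_t(x_i, x_j) / n
    KH = k(xp, xp, tH)
    # v = (K^2 K_H + n lam I)^{-1} K q
    v = np.linalg.solve(K @ K @ KH + n * lam * np.eye(n), K @ q_vals)
    return lambda x: k(np.atleast_1d(x), xp, tH) @ v

rng = np.random.default_rng(1)
xp = rng.normal(0.0, 1.0, 400)           # sample from p = N(0, 1)
q_vals = np.exp(-(xp - 0.5) ** 2 / 2) / np.sqrt(2 * np.pi)  # q = N(0.5, 1)
ratio = fire_type2(xp, q_vals, t=0.1, tH=0.5, lam=1e-5)
print(ratio(np.array([0.0, 1.0])))
```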
Comparison of Type I and Type II settings.
1. In the Type II setting q does not have to be a density function (i.e., non-negative and integrating to one).
2. Eq. 7 of the Type I setting cannot be easily solved in the absence of a sample $z_q$ from q, since estimating $\mathcal{K}_q$ requires either sampling from q (if it is a density) or estimating the integral in some other way, which may be difficult in high dimension but perhaps of interest in certain low-dimensional application domains.
3. There are a number of problems (e.g., many problems involving MCMC) where q(x) is known explicitly (possibly up to a multiplicative constant), while sampling from q is expensive or even impossible computationally [13].
4. Unlike Eq. 5, Eq. 6 has an error term depending on the kernel. For example, in the important case of the Gaussian kernel, the error is of the order O(t), where t is the variance of the Gaussian.
5. Several norms are available in the Type I setting, but only the $L_{2,p}$ norm is available for Type II.
3 Theoretical analysis: bounds and convergence rates for Gaussian kernels
In this section, we state our main results on bounds and convergence rates for our algorithm based on Tikhonov regularization with a Gaussian kernel. We consider both Type I and Type II settings for the Euclidean and manifold cases and make a remark on Euclidean domains with boundary. To simplify the theoretical development, the integral operator and the RKHS $\mathcal{H}$ will correspond to the same Gaussian kernel $k_t(x,y)$. The proofs can be found in the supplemental material.
Assumptions: The set $\Omega$, where the density function p is defined, could be one of the following: (1) the whole $\mathbb{R}^d$; (2) a compact smooth Riemannian sub-manifold M of dimension d in $\mathbb{R}^n$. We also need $p(x) < \infty$, $q(x) < \infty$ for any $x\in\Omega$ and that $\frac{q}{p}, \frac{q}{p^2}$ are in the Sobolev space $W_2^2(\Omega)$.
Theorem 1. (Type I setting.) Let p and q be two density functions on $\Omega$. Given n points $z_p = \{x_1,\dots,x_n\}$ i.i.d. sampled from p and m points $z_q = \{x'_1,\dots,x'_m\}$ i.i.d. sampled from q, and for small enough t, for the solution to the optimization problem in (7), with confidence at least $1 - 2e^{-\tau}$, we have:
(1) If the domain $\Omega$ is $\mathbb{R}^d$, for some constants $C_1, C_2, C_3$ independent of t, $\lambda$,
$$\Big\|f^I_{\lambda,z} - \frac{q}{p}\Big\|_{2,p} \le C_1 t + C_2\lambda^{\frac{1}{2}} + \frac{C_3\sqrt{\tau}}{\lambda\,t^{d/2}}\Big(\frac{1}{\sqrt{m}} + \frac{1}{\lambda^{1/6}\sqrt{n}}\Big) \qquad (12)$$
(2) If the domain $\Omega$ is a compact sub-manifold without boundary of dimension d, for some $0 < \epsilon < 1$ and $C_1, C_2, C_3$ independent of t, $\lambda$,
$$\Big\|f^I_{\lambda,z} - \frac{q}{p}\Big\|_{2,p} \le C_1 t^{1-\epsilon} + C_2\lambda^{\frac{1}{2}} + \frac{C_3\sqrt{\tau}}{\lambda\,t^{d/2}}\Big(\frac{1}{\sqrt{m}} + \frac{1}{\lambda^{1/6}\sqrt{n}}\Big) \qquad (13)$$
Corollary 2. (Type I setting.) Assuming $m > \lambda^{1/3}n$, with confidence at least $1 - 2e^{-\tau}$, when (1) $\Omega = \mathbb{R}^d$, (2) $\Omega$ is a d-dimensional sub-manifold of a Euclidean space, we have
$$\text{(1)}\ \Big\|f^I_{\lambda,z_p} - \frac{q}{p}\Big\|^2_{2,p} = O\Big(n^{-\frac{1}{3.5+d/2}}\Big) \qquad \text{(2)}\ \Big\|f^I_{\lambda,z_p} - \frac{q}{p}\Big\|^2_{2,p} = O\Big(n^{-\frac{1-\epsilon}{3.5(1-\epsilon)+d/2}}\Big)\ \forall\epsilon\in(0,1)$$
Theorem 3. (Type II setting.) Let p be a density function on $\Omega$ and q be a function satisfying the assumptions. Given n points $z = \{x_1,\dots,x_n\}$ sampled i.i.d. from p, and for sufficiently small t, for the solution to the optimization problem in (11), with confidence at least $1 - 2e^{-\tau}$, we have:
(1) If the domain $\Omega$ is $\mathbb{R}^d$,
$$\Big\|f^{II}_{\lambda,z} - \frac{q}{p}\Big\|_{2,p} \le C_1 t + C_2\lambda^{\frac{1}{2}} + C_3\lambda^{-\frac{1}{3}}\big\|\mathcal{K}_{t,q}1 - q\big\|_{2,p} + \frac{C_4\sqrt{\tau}}{\lambda^{3/2}t^{d/2}\sqrt{n}}, \qquad (14)$$
where $C_1, C_2, C_3, C_4$ are constants independent of t, $\lambda$. Moreover, $\|\mathcal{K}_{t,q}1 - q\|_{2,p} = O(t)$.
(2) If $\Omega$ is a d-dimensional sub-manifold of a Euclidean space, for any $0 < \epsilon < 1$,
$$\Big\|f^{II}_{\lambda,z} - \frac{q}{p}\Big\|_{2,p} \le C_1 t^{1-\epsilon} + C_2\lambda^{\frac{1}{2}} + C_3\lambda^{-\frac{1}{3}}\big\|\mathcal{K}_{t,q}1 - q\big\|_{2,p} + \frac{C_4\sqrt{\tau}}{\lambda^{3/2}t^{d/2}\sqrt{n}}, \qquad (15)$$
where $C_1, C_2, C_3, C_4$ are independent of t, $\lambda$. Moreover, $\|\mathcal{K}_{t,q}1 - q\|_{2,p} = O(t^{1-\epsilon})$, $\forall\epsilon > 0$.
Corollary 4. (Type II setting.) With confidence at least $1 - 2e^{-\tau}$, when (1) $\Omega = \mathbb{R}^d$, (2) $\Omega$ is a d-dimensional sub-manifold of a Euclidean space, we have
$$\text{(1)}\ \Big\|f^{II}_{\lambda,z_p} - \frac{q}{p}\Big\|^2_{2,p} = O\Big(n^{-\frac{1}{4+\frac{5d}{6}}}\Big) \qquad \text{(2)}\ \Big\|f^{II}_{\lambda,z_p} - \frac{q}{p}\Big\|^2_{2,p} = O\Big(n^{-\frac{1-\epsilon}{4-4\epsilon+\frac{5d}{6}}}\Big)\ \forall\epsilon\in(0,1)$$
4 Model Selection and Experiments
We describe an unsupervised technique for parameter selection, Cross-Density Cross-Validation (CD-CV), based on a performance measure unique to our setting. We proceed to evaluate our method.
The setting. In our experiments, we have $X^p = \{x^p_1,\dots,x^p_n\}$ and $X^q = \{x^q_1,\dots,x^q_m\}$. The goal is to estimate q/p, assuming that $X^p$, $X^q$ are i.i.d. sampled from p, q respectively. Note that learning q/p is unsupervised and our algorithms typically have two parameters: the kernel width t and the regularization parameter $\lambda$.
Performance Measures and CD-CV Model Selection. We describe a set of performance measures used for parameter selection. For a given function u, we have the following importance sampling equality (Eq. 1): $E_p(u(x)) = E_q\big(u(x)\frac{p(x)}{q(x)}\big)$. If f is an approximation of the true ratio q/p, and $X^p$, $X^q$ are samples from p, q respectively, we will have the following approximation to the previous equation: $\frac{1}{n}\sum_{i=1}^n u(x^p_i)f(x^p_i) \approx \frac{1}{m}\sum_{j=1}^m u(x^q_j)$. So after obtaining an estimate f of the ratio, we can validate it using the following performance measure:
$$J_{CD}(f; X^p, X^q, U) = \frac{1}{F}\sum_{l=1}^F\Big(\frac{1}{n}\sum_{i=1}^n u_l(x^p_i)f(x^p_i) - \frac{1}{m}\sum_{j=1}^m u_l(x^q_j)\Big)^2 \qquad (16)$$
where $U = \{u_1,\dots,u_F\}$ is a collection of test functions. Using this performance measure allows various cross-validation procedures to be used for parameter selection. We note that this way to measure error is related to the LSIF [10] and KLIEP [23] algorithms. However, there a similar measure is used to construct an approximation to the ratio q/p using the functions $u_1,\dots,u_F$ as a basis. In our setting, we can use test functions (e.g., linear functions) which are poorly suited as a basis for approximating the density ratio.
We will use the following two families of test functions for parameter selection: (1) sets of random linear functions $u_i(x) = \beta^T x$ where $\beta \sim N(0, Id)$; (2) sets of random half-space indicator functions, $u_i(x) = \mathbf{1}_{\beta^T x > 0}$.
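As a sketch of how Eq. 16 can be computed with random linear test functions (our illustration; the ratio values f(x_i^p) would come from any of the estimators above):

```python
import numpy as np

def j_cd(f_on_xp, xp, xq, num_fns=50, rng=None):
    """CD-CV score of Eq. 16 with random linear test functions u(x) = beta^T x."""
    rng = rng or np.random.default_rng(0)
    errs = []
    for _ in range(num_fns):
        beta = rng.normal(size=xp.shape[1])
        lhs = np.mean((xp @ beta) * f_on_xp)   # (1/n) sum_i u(x_i^p) f(x_i^p)
        rhs = np.mean(xq @ beta)               # (1/m) sum_j u(x_j^q)
        errs.append((lhs - rhs) ** 2)
    return float(np.mean(errs))

# Parameter selection: evaluate j_cd on validation folds for each (t, lambda)
# on a grid and keep the pair with the smallest score.
```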
Procedures for parameter selection. The performance is optimized using five-fold cross-validation by splitting the data set into two parts, $X^p_{train}$ and $X^q_{train}$ for training and $X^p_{cv}$ and $X^q_{cv}$ for validation. The range we use for the kernel width t is $(t_0, 2t_0, \dots, 2^9 t_0)$, where $t_0$ is the average distance of the 10 nearest neighbors. The range for the regularization parameter $\lambda$ is (1e-5, 1e-6, ..., 1e-10).
Data sets and Resampling. We use two datasets, CPUsmall and Kin8nm, for regression, and USPS handwritten digits for classification. We draw the first 500 or 1000 points from the original data set as $X^p$. To obtain $X^q$, the following two ways of resampling, using the features or the label information, are used (along the lines of those in [6]).
Given a set of data with labels $\{(x_1,y_1), (x_2,y_2), \dots, (x_n,y_n)\}$ and denoting by $P_i$ the probability of the i-th instance being chosen, we resample as follows:
(1) Resampling using features (labels $y_i$ are not used): $P_i = \frac{e^{(a\langle x_i, e_1\rangle - b)/\sigma_v}}{1 + e^{(a\langle x_i, e_1\rangle - b)/\sigma_v}}$, where a, b are the resampling parameters, $e_1$ is the first principal component, and $\sigma_v$ is the standard deviation of the projections onto $e_1$. This resampling method will be denoted by PCA(a, b).
(2) Resampling using labels: $P_i = 1$ if $y_i \in L^q$ and $P_i = 0$ otherwise, where $y_i \in L = \{1, 2, \dots, k\}$ and $L^q$ is a subset of the whole label set L. It only applies to binary problems obtained by aggregating different classes in the multi-class setting.
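Both resampling schemes are easy to reproduce; here is a short Python sketch (our illustration; the SVD-based PCA and the acceptance step are assumptions consistent with the description above):

```python
import numpy as np

def pca_resample_probs(X, a, b):
    """PCA(a, b): P_i = sigmoid((a <x_i, e_1> - b) / sigma_v)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]                            # projections onto e_1
    return 1.0 / (1.0 + np.exp(-(a * proj - b) / proj.std()))

def label_resample_probs(y, Lq):
    """Label-based: P_i = 1 if y_i is in L^q, else 0."""
    return np.isin(y, list(Lq)).astype(float)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
P = pca_resample_probs(X, a=5.0, b=0.0)
Xq = X[rng.random(len(X)) < P]                   # accepted points form X^q
```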
Testing the FIRE algorithm. In the first experiment, we test our method for selecting parameters by focusing on the error $J_{CD}(f; X^p, X^q, U)$ in Eq. 16 for different function classes U. Parameters are chosen using a family of functions $U_1$, while the performance of the parameters is measured using an independent function family $U_2$. This measure is important because in practice the functions we are interested in may not be the ones chosen for validation.
We use the USPS data sets for this experiment. As a basis for comparison we use TIKDE (Thresholded Inverse Kernel Density Estimator). TIKDE estimates p and q respectively using Kernel Density Estimation (KDE), and assigns the estimate of p the value $\theta$ at any x where it falls below the threshold $\theta$; TIKDE then outputs the ratio of the two estimates. We note that the chosen threshold $\theta$ is key to reasonable performance. One issue of this heuristic is that it could underestimate in regions with high density ratio, due to the uniform thresholding. We also compare our methods to LSIF [10]. In these experiments we do not compare with KMM, as an out-of-sample extension is necessary for fair comparison.
Table 1 shows the average errors of various methods, defined in Eq. 16, on a held-out set $X^{err}$ over 5 trials. We use different validation functions $f^{cv}$ (columns) and error-measuring functions $f^{err}$ (rows). N is the number of random functions used for validation. The error-measuring function families $U_2$ are as follows: (1) Linear (L.): random linear functions $f(x) = \beta^T x$ where $\beta \sim N(0, Id)$; (2) Half-space (H.S.): sets of random half-space indicator functions, $f(x) = \mathbf{1}_{\beta^T x > 0}$; (3) Kernel (K.): random linear combinations of kernel functions centered at training data, $f(x) = \beta^T K$ where $\beta \sim N(0, Id)$ and $K_{ij} = k(x_i, x_j)$ for $x_i$ from the training set; (4) Kernel indicator (K.I.) functions $f(x) = \mathbf{1}_{g(x) > 0}$, where g is as in (3).
Table 1: USPS data set with resampling using PCA(5, $\sigma_v$), with $|X^p| = 500$, $|X^q| = 1371$. Around 400 points in $X^p$ and 700 in $X^q$ are used in 5-fold CV; the rest are held out for computing the error. Columns: validation family (Linear or Half-Spaces) and number N of validation functions. Row blocks: error-measuring family $U_2$.

Error family L.:
              Linear N=50   Linear N=200   H.S. N=50   H.S. N=200
TIKDE         10.9          10.9           10.9        10.9
LSIF          14.1          14.1           26.8        28.2
FIREp          3.6           3.7            5.5         6.3
FIREp,q        4.7           4.7            7.4         6.8
FIREq          5.9           6.2            9.3         9.3

Error family H.S.:
TIKDE          2.6           2.6            2.6         2.6
LSIF           3.9           3.9            3.7         3.9
FIREp          1.0           0.9            1.0         1.2
FIREp,q        0.9           1.0            1.4         1.1
FIREq          1.2           1.4            1.6         1.6

Error family K.:
TIKDE          4.7           4.7            4.7         4.7
LSIF          16.1          16.1           15.6        13.8
FIREp          1.2           1.1            2.8         3.6
FIREp,q        2.1           2.0            4.2         2.6
FIREq          5.2           4.3            6.1         6.1

Error family K.I.:
TIKDE          4.2           4.2            4.2         4.2
LSIF           4.4           4.4            5.3         4.4
FIREp          0.9           0.7            1.2         1.1
FIREp,q        0.6           0.6            1.9         1.1
FIREq          1.2           0.9            2.2         2.2
Supervised Learning: Regression and Classification. We compare our FIRE algorithm with several other methods on regression and classification tasks. We consider the situation where part of the data set $X^p$ is labeled and all of $X^q$ is unlabeled. We use weighted ordinary least squares for regression and a weighted linear SVM for classification.
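For reference, a minimal sketch of how the estimated weights enter the downstream learner (our illustration; X and y are the labeled data and w the estimated ratio values at those points):

```python
import numpy as np

def weighted_ols(X, y, w):
    """Importance-weighted least squares: minimize sum_i w_i (y_i - x_i^T theta)^2."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # add an intercept column
    sw = np.sqrt(np.maximum(w, 0.0))             # reweighting via sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * Xb, sw * y, rcond=None)
    return theta
```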
Regression. The square loss function is used for regression. The performance is measured using the normalized square loss, $\frac{\sum_{i=1}^n(\hat{y}_i - y_i)^2}{\sum_{i=1}^n(\bar{y} - y_i)^2}$. $X^q$ is resampled using the PCA resampler described before. L stands for the Linear and HS for the Half-Space function families for parameter selection.
Table 2: Mean normalized square loss on CPUsmall and Kin8nm, with $|X^p| = 1000$, $|X^q| = 2000$. CPUsmall is resampled by PCA(5, $\sigma_v$); Kin8nm by PCA(1, $\sigma_v$). L/HS denote the Linear/Half-Space validation families; OLS is the unweighted baseline ("-" marks cells not emitted per column in the source).

CPUsmall, resampled by PCA(5, $\sigma_v$):
No. of labeled     100            200            500
Weights          L      HS      L      HS      L      HS
OLS             .74    .50     -      .83     -      .59
TIKDE           .38    .36     .30    .29     .28    .28
KMM            1.86   1.86    1.9    1.9     2.5    2.5
LSIF            .39    .39     .31    .31     .33    .33
FIREp           .33    .33     .29    .29     .27    .27
FIREp,q         .33    .33     .29    .29     .27    .27
FIREq           .32    .33     .28    .29     .27    .27

Kin8nm, resampled by PCA(1, $\sigma_v$):
No. of labeled     100            200            500
Weights          L      HS      L      HS      L      HS
OLS             -      .55     .55    -       .54    -
TIKDE           .57    .57     .55    .55     .53    .53
KMM             .58    .58     .54    .54     .52    .52
LSIF            .57    .56     .55    .54     .52    .52
FIREp           .57    .56     .55    .54     .52    .52
FIREp,q         .56    .56     .55    .54     .52    .52
FIREq           .56    .56     .55    -       .52    .52
Classification. We use a weighted linear SVM; performance is the percentage of incorrectly labeled test-set instances.
Table 3: Average error on USPS with +1 class = {0-4}, -1 class = {5-9}, and $|X^p| = 1000$, $|X^q| = 2000$. The left half of the table uses resampling PCA(5, $\sigma_v$); the right half shows resampling based on label information with L = {{0-4}, {5-9}} and L' = {0, 1, 5, 6}. SVM is the unweighted baseline (one value per label count).

PCA(5, $\sigma_v$):
No. of labeled     100            200            500
Weights          L      HS      L      HS      L      HS
SVM             10.2    -       8.1    -       5.7    -
TIKDE            9.4   9.4      7.2   7.2      4.9   4.9
KMM              8.1   8.1      5.9   5.9      4.7   4.7
LSIF             9.5  10.2      7.3   8.1      5.0   5.7
FIREp            8.9   6.8      5.3   5.0      4.1   4.1
FIREp,q          7.0   7.0      5.1   5.1      4.1   4.1
FIREq            5.5   7.3      4.8   5.4      4.1   4.4

Label-based resampling, L = {{0-4}, {5-9}}, L' = {0, 1, 5, 6}:
No. of labeled     100            200            500
Weights          L      HS      L      HS      L      HS
SVM             18.6    -      16.4    -      12.9    -
TIKDE           18.5  18.5     16.4  16.4     12.4  12.4
KMM             17.5  17.5     13.5  13.5     10.3  10.3
LSIF            18.5  18.5     16.2  16.3     12.2  12.2
FIREp           17.9  18.4     16.1  16.1     11.5  12.0
FIREp,q         18.0  18.5     16.1  16.2     11.6  12.0
FIREq           18.3  18.4     16.0  16.2     11.8  12.0
Acknowledgements. The work was partially supported by NSF Grants IIS 0643916, IIS 1117707.
References
[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399-2434, 2006.
[2] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In ICML, 2007.
[3] E. De Vito, L. Rosasco, A. Caponnetto, U. De Giovannini, and F. Odone. Learning from examples as an inverse problem. JMLR, 6:883, 2006.
[4] E. De Vito, L. Rosasco, and A. Toigo. Spectral regularization for support estimation. In NIPS, pages 487-495, 2010.
[5] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Springer, 1996.
[6] A. Gretton, A. Smola, J. Huang, M. Schmittfull, K. Borgwardt, and B. Schölkopf. Covariate shift by kernel mean matching. Dataset Shift in Machine Learning, pages 131-160, 2009.
[7] S. Grünewälder, A. Gretton, and J. Shawe-Taylor. Smooth operators. In ICML, 2013.
[8] S. Grünewälder, G. Lever, L. Baldassarre, S. Patterson, A. Gretton, and M. Pontil. Conditional mean embeddings as regressors. In ICML, 2012.
[9] J. Huang, A. Gretton, K. M. Borgwardt, B. Schölkopf, and A. Smola. Correcting sample selection bias by unlabeled data. In NIPS, pages 601-608, 2006.
[10] T. Kanamori, S. Hido, and M. Sugiyama. A least-squares approach to direct importance estimation. JMLR, 10:1391-1445, 2009.
[11] J. S. Kim and C. Scott. Robust kernel density estimation. In ICASSP, pages 3381-3384, 2008.
[12] S. Mukherjee and V. Vapnik. Support vector method for multivariate density estimation. Center for Biological and Computational Learning, Department of Brain and Cognitive Sciences, MIT, CBCL, volume 170, 1999.
[13] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125-139, 2001.
[14] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. NIPS, 20:1089-1096, 2008.
[15] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.
[16] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[17] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[18] T. Shi, M. Belkin, and B. Yu. Data spectroscopy: Eigenspaces of convolution operators and clustering. The Annals of Statistics, 37(6B):3960-3984, 2009.
[19] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
[20] A. J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation, and operator inversion. Algorithmica, 22(1):211-231, 1998.
[21] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[22] M. Sugiyama, M. Krauledat, and K. Müller. Covariate shift adaptation by importance weighted cross validation. JMLR, 8:985-1005, 2007.
[23] M. Sugiyama, S. Nakajima, H. Kashima, P. von Buenau, and M. Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. NIPS, 20:1433-1440, 2008.
[24] A. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[25] C. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In ICML, 2000.
[26] Y. Yu and C. Szepesvári. Analysis of kernel mean matching under covariate shift. In ICML, 2012.
[27] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In ICML, 2004.
4,306 | 4,898 | Regression-tree Tuning in a Streaming Setting
Samory Kpotufe*
Toyota Technological Institute at Chicago†
samory@ttic.edu
Francesco Orabona*
Toyota Technological Institute at Chicago
orabona@ttic.edu
Abstract
We consider the problem of maintaining the data-structures of a partition-based regression procedure in a setting where the training data arrives sequentially over time. We prove that it is possible to maintain such a structure in time O(log n) at any time step n while achieving a nearly-optimal regression rate of $\tilde{O}(n^{-2/(2+d)})$ in terms of the unknown metric dimension d. Finally, we prove a new regression lower-bound which is independent of a given data size, and hence is more appropriate for the streaming setting.
1 Introduction
Traditional nonparametric regression such as kernel or k-NN can be expensive to estimate given modern large training data sizes. It is therefore common to resort to cheaper methods such as tree-based regression, which precompute the regression estimates over a partition of the data space [7]. Given a future query x, the estimate $f_n(x)$ simply consists of finding the closest cell of the partition by traversing an appropriate tree-structure and returning the precomputed estimate. The partition and precomputed estimates depend on the training data and are usually maintained in batch-mode.
We are interested in maintaining such a partition and estimates in a real-world setting where the training data arrives sequentially over time. Our constraints are those of fast updates at every time step, while maintaining a near-minimax regression error-rate at any point in time.
The error-rate of tree-based regression is well known to depend on the size of the partition's cells. We will call this size the binwidth. The minimax-optimal binwidth $\epsilon_n$ is known to be of the form $O(n^{-1/(2+d)})$, assuming a training data of size n from a metric space of unknown dimension d, and an unknown Lipschitz target function f. This setting of $\epsilon_n$ would then yield a minimax error rate of $O(n^{-2/(2+d)})$. Thus, the dimension d is the most important problem variable entering the rate (and the tuning of $\epsilon_n$), while other problem variables such as the Lipschitz properties of f are less crucial in comparison. The main focus of this work is therefore that of adapting to the unknown d while maintaining fast partition estimates in a streaming setting.
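For intuition only, here is the standard back-of-the-envelope balance behind these rates (a textbook heuristic, not an argument from this paper's proofs): a cell of binwidth $\epsilon$ receives on the order of $n\epsilon^d$ of the n samples, so a cell-average estimate has variance about $1/(n\epsilon^d)$, while the Lipschitz condition caps the squared bias at $\lambda^2\epsilon^2$:
$$\mathbb{E}\,|f_n(x) - f(x)|^2 \;\lesssim\; \frac{1}{n\epsilon^d} + \lambda^2\epsilon^2 \quad\Longrightarrow\quad \epsilon_n \propto n^{-1/(2+d)}, \qquad \text{error} \propto n^{-2/(2+d)}.$$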
A first idea would be to start with an initial dimension estimation phase (where the regression estimates are suboptimal), and then use the estimated dimension for subsequent data in a following phase, which leaves only the problem of maintaining partition estimates over time. However, while this sounds reasonable, it is generally unclear when to confidently stop such an initial phase, since this would depend on the unknown d and the distribution of the data.
Our solution is to interleave dimension estimation with regression updates as the data arrives sequentially. However, the algorithm never relies on the estimated dimensions and views them rather as guesses $d_i$. Even if $d_i \neq d$, it is kept as long as it is not hurtful to regression performance. The guess $d_i$ is discarded once we detect that it hurts the regression; a new $d_{i+1}$ is then estimated and a new Phase i+1 is started. The decision to discard $d_i$ relies on monitoring quantities that play into the tradeoff between regression variance and bias; more precisely, we monitor the size of the partition
* SK and FO contributed equally to this paper.
† Other affiliation: Max Planck Institute for Intelligent Systems, Germany.
and the partition's binwidth $\epsilon_t$. We note that the idea can be applied to other forms of regression
where other quantities control the regression variance and bias (see longer version of the paper).
1.1 Technical Overview of Results
We assume that training data (xi , Yi ) is sampled sequentially over time, xi belongs to a general
metric space X of unknown dimension d, and Yi is real. The exact setup is given in Section 2.
The algorithm (presented in Section 2.3) maintains regression estimates for all training samples $\mathbf{x}^n \triangleq \{x_t\}_{t=1}^n$ arriving over time, while constantly updating a partition of the data and the partition binwidth. At any time t = n, all updates are provably of order log n, with constants depending on the unknown dimension d of X.
At time t = n, the estimate for a query point x is given by the precomputed estimate for the closest point to x in $\mathbf{x}^n$, which can be found fast using an off-the-shelf similarity search structure, such as those of [2, 10]. We prove that the $L_2$ error of the algorithm is $\tilde{O}(n^{-2/(2+d)})$, nearly optimal in terms of the unknown dimension d of the metric X.
Finally, we prove a new lower-bound for regression on a generic metric X , where the worst-case
distribution is the same as n increases. Note that traditional lower-bounds for the offline setting
derive a different worst-case distribution for each sample size n. Thus, our lower-bound is more
appropriate to the streaming setting where the data arrives over time from the same distribution.
The results are discussed in more technical detail in Section 3.
1.2 Related Work
Although various interesting heuristics have been proposed for maintaining tree-based learners in
streaming settings (see e.g. [1, 5, 11, 15]), the problem has not received much theoretical attention.
This is however an important problem given the growing size of modern datasets, and given that in
many modern applications, training data is actually acquired sequentially over time and incremental
updates have to be efficient (see e.g. Robotics [12, 16], Finance [8]).
The most closely related theoretical work is that of [6] which treats the problem of tuning a local
polynomial regressor where the training data is acquired over time. Their setting however is that
of a Euclidean space where d is known (ambient Euclidean dimension). [6] is thus concerned with
maintaining a minimax error rate w.r.t. the known dimension d, while efficiently tuning regression
bandwidth.
A possible alternative to the method analyzed here is to employ some form of cross-validation or
even online solutions based on mixture of experts [3], by keeping track of different partitions, each
corresponding to some setting of the binwidth $\epsilon_n$. This is however likely expensive to maintain in
practice if good prediction performance is desired.
2 Preliminaries
2.1 Notions of metric dimension
We consider the following notion of dimension which extends traditional notions of dimension (e.g. Euclidean dimension and manifold dimension) to general metric spaces [4]. We assume throughout, w.l.o.g., that the space X has diameter at most 1 under a metric $\rho$.
Definition 1. The metric measure space $(\mathcal{X}, \rho, \mu)$ has metric measure-dimension d if there exist $\underline{C}, \bar{C}$ such that for all $\epsilon > 0$ and $x \in \mathcal{X}$, $\underline{C}\epsilon^d \le \mu(B(x,\epsilon)) \le \bar{C}\epsilon^d$.
The assumption of finite metric-measure dimension ensures that the measure $\mu$ has mass everywhere on the space $\mathcal{X}$. This assumption is a generalization (to a metric space) of common assumptions where the measure has an upper- and lower-bounded density on a compact Euclidean space; it is however more general in that it does not require the measure $\mu$ to have a density (relative to any reference measure). The metric-measure dimension implies the following other notion of metric dimension.
Definition 2. The metric $(\mathcal{X}, \rho)$ has metric dimension d if there exists C such that, for all $\epsilon > 0$, $\mathcal{X}$ has an $\epsilon$-cover of size at most $C\epsilon^{-d}$.
The relation between the two notions of dimension is stated in the following lemma of [9], which allows us to use either notion as needed.
Lemma 1 ([9]). If $(\mathcal{X}, \rho, \mu)$ has metric-measure dimension d, then there exist $\underline{C}, \bar{C}$ such that, for all $\epsilon > 0$, any ball B(x, r) centered on $(\mathcal{X}, \rho)$ has an $\epsilon r$-cover of size in $[\underline{C}\epsilon^{-d}, \bar{C}\epsilon^{-d}]$.
2.2 Problem Setup
We receive data pairs $(x_1, Y_1), (x_2, Y_2), \dots$ sequentially, i.i.d. The input $x_t$ belongs to a metric measure space $(\mathcal{X}, \rho, \mu)$ of diameter at most 1 and of metric measure dimension d. The output $Y_t$ belongs to a subset of $\mathbb{R}$ of bounded diameter $\delta_Y$, and satisfies $Y_t = f(x_t) + \eta(x_t)$. The noise $\eta(x_t)$ has 0 mean. The unknown function f is assumed to be $\lambda$-Lipschitz w.r.t. $\rho$ for an unknown parameter $\lambda$, that is, $\forall x, x' \in \mathcal{X}$, $|f(x) - f(x')| \le \lambda\rho(x, x')$.
L2 error: Our main performance result bounds the excess $L_2$ risk
$$\mathbb{E}_{\mathbf{x}^n, Y^n}\|f_n - f\|^2_{2,\mu} \;\triangleq\; \mathbb{E}_{\mathbf{x}^n, Y^n}\mathbb{E}_X\,|f_n(X) - f(X)|^2.$$
We will often also be interested in the average error on the training sample: recall that at any time t, an estimate $f_t(x_s)$ of f is produced for every $x_s \in \mathbf{x}^t$. The average error on $\mathbf{x}^n$ at t = n is denoted
$$\mathbb{E}_{Y^n}\mathbb{E}_n\,|f_n(X) - f(X)|^2 \;\triangleq\; \frac{1}{n}\sum_{t=1}^n \mathbb{E}_{Y^n}|f_n(x_t) - f(x_t)|^2.$$
2.3 Algorithm
The procedure (Algorithm 1) works by partitioning the data into small regions of size roughly $\epsilon_t/2$ at any time t, and computing the regression estimate of the centers of each region. All points falling in the same region (identified by a center point) are assigned the same regression estimate: the average Y values of all points in the region.
The procedure works in phases, where each Phase i corresponds to a guess $d_i$ of the metric dimension d. Where $\epsilon_t$ might have been set to $t^{-1/(2+d)}$ if we knew d, we set it to $t_i^{-1/(2+d_i)}$, where $t_i$ is the current time step within Phase i.
We ensure that in each phase our guess $d_i$ does not hurt the variance-bias tradeoff of the estimates: this is done by monitoring the size of the partition ($|X_i|$ in the algorithm), which controls the variance (see analysis in Section 4), relative to the binwidth $\epsilon_t$, which controls bias. Whenever $|X_i|$ is too large relative to $\epsilon_t$, the variance of the procedure is likely too large, so we start a new phase with a new guess of $d_i$.
Since the algorithm maintains at any time n an estimate $f_n(x_t)$ for all $x_t \in \mathbf{x}^n$, for any query point $x \in \mathcal{X}$, we simply return $f_n(x) = f_n(x_t)$ where $x_t$ is the closest point to x in $\mathbf{x}^n$.
Despite having to adaptively tune to the unknown d, the main computation at each time step consists of just a 2-approximate nearest neighbor search for the closest center. These searches can be
done fast in time O (log n) by employing the off-the-shelf online search-procedure of [10]. This is
emphasized in Lemma 2 below.
Finally, the algorithm employs a constant $\tilde{C}$ which is assumed to upper-bound the constant C in Definition 2. This is a minor assumption since C is generally taken to be small, e.g. 1, in the machine learning literature, and is exactly quantifiable for various metrics [4, 10].
3 Discussion of Results
3.1 Time complexity
The time complexity of updates is emphasized in the following lemma.
Lemma 2. Suppose $(\mathcal{X}, \rho, \mu)$ has metric dimension d. Then there exists C depending on d such that all computations of the algorithm at any time t = n can be done in time C log n.
Algorithm 1 Incremental tree-based regressor.
1: Initialize: i = 1, d_i = 1, t_i = 0, Centers X_i = ∅
2: for t = 1, 2, ..., T do
3:   Receive (x_t, y_t)
4:   t_i ← t_i + 1   // counts the time steps within Phase i
5:   ε_t ← t_i^{-1/(2+d_i)}
6:   Set x_s ∈ X_i to the 2-approximate nearest neighbor of x_t
7:   if ρ(x_t, x_s) ≤ ε_t then
8:     Assign x_t to x_s
9:     f_n(x_s) ← update average Y for center x_s with y_t
10:    For every x_r, r ≤ t, assigned to x_s, f_n(x_r) = f_n(x_s)
11:  else
12:    if |X_i| + 1 > C̃ · 4^{d_i} · ε_t^{-d_i} then
13:      // Start of Phase i + 1
14:      d_{i+1} ← ⌈log((|X_i| + 1)/C̃) / log(4/ε_t)⌉
15:      i ← i + 1
16:    end if
17:    Add x_t as a new center in X_i
18:  end if
19: end for
Figure 1: As t varies over time, a ball around a center x_s can eventually contain both points assigned to x_s and points not assigned to it, and even contain other centers. This results in a complex partitioning of the data.
Proof. The main computation at time n consists of finding the 2-approximate nearest neighbor of $x_n$ in $X_i$ and updating the data structure for the nearest-neighbor search. These centers are all at least $\epsilon_n/2$ far apart. Using the results of [10], this can be done online in time $O(\log(1/\epsilon_n) + \log\log(1/\epsilon_n))$.
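For concreteness, here is a minimal Python sketch of Algorithm 1 (our illustration, not the authors' code). It uses brute-force exact nearest-neighbor search in place of the online structure of [10], so it matches the logic but not the O(log n) update time:

```python
import numpy as np

class IncrementalTreeRegressor:
    """Sketch of Algorithm 1 with a naive nearest-neighbor search."""
    def __init__(self, C_tilde=1.0):
        self.C = C_tilde
        self.i, self.d_i, self.t_i = 1, 1, 0       # phase, guess, step in phase
        self.centers, self.sums, self.counts = [], [], []

    def update(self, x, y):
        self.t_i += 1
        eps = self.t_i ** (-1.0 / (2 + self.d_i))  # current binwidth
        x = np.atleast_1d(np.asarray(x, float))
        if self.centers:
            dists = np.linalg.norm(np.asarray(self.centers) - x, axis=1)
            s = int(np.argmin(dists))
        if self.centers and dists[s] <= eps:       # assign x to center s
            self.sums[s] += y
            self.counts[s] += 1
        else:
            if len(self.centers) + 1 > self.C * 4 ** self.d_i * eps ** (-self.d_i):
                # partition too large for the current guess: start Phase i + 1
                self.d_i = int(np.ceil(np.log((len(self.centers) + 1) / self.C)
                                       / np.log(4.0 / eps)))
                self.i += 1
                self.t_i = 0                       # t_i counts steps within a phase
            self.centers.append(x)
            self.sums.append(float(y))
            self.counts.append(1)

    def predict(self, x):
        x = np.atleast_1d(np.asarray(x, float))
        d = np.linalg.norm(np.asarray(self.centers) - x, axis=1)
        s = int(np.argmin(d))
        return self.sums[s] / self.counts[s]       # average Y of closest center

# usage on a toy stream with a Lipschitz target and bounded noise
rng = np.random.default_rng(0)
reg = IncrementalTreeRegressor(C_tilde=1.0)
for _ in range(2000):
    x = rng.uniform(0, 1, size=3)
    reg.update(x, np.sin(4 * x[0]) + rng.uniform(-0.1, 0.1))
print(reg.predict(np.array([0.5, 0.5, 0.5])))
```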
3.2 Convergence rates
The main theorem below bounds the $L_2$ error of the algorithm at any given point in time. The main difficulty is in the fact that the data is partitioned in a complicated way due to the ever-changing binwidth $\epsilon_t$: every ball around a center can eventually contain both points assigned to the center and points not assigned to it, and in fact can contain other centers (see Figure 1). This makes it hard to get a handle on the number of points assigned to a single center $x_t$ (contributing to the variance of $f_n(x_t)$) and the distance between points assigned to the same center (contributing to the bias). This is not the case in classical analyses of tree-based regression, since the data partitioning is usually clearly defined.
The problem is handled by first looking at the average error over points in $\mathbf{x}^n$, which is less difficult.
Theorem 1. Suppose the space $(\mathcal{X}, \rho, \mu)$ has metric-measure dimension d. For any $x \in \mathcal{X}$, define $f_n(x) = f_n(x_t)$ where $x_t$ is the closest point to x in $\mathbf{x}^n$. Then at any time t = n, we have for some C independent of n,
$$\mathbb{E}_{\mathbf{x}^n,Y^n}\|f_n - f\|^2_{2,\mu} \;\le\; C(d\log n)\,\sup_{\mathbf{x}^n}\mathbb{E}_{Y^n}\mathbb{E}_n|f_n(X) - f(X)|^2 + C\lambda^2\Big(\frac{d\log n}{n}\Big)^{2/d} + \frac{\delta_Y^2}{n}.$$
If the algorithm parameter $\tilde{C} \ge C$ of Def. 2, then for some constant C' independent of n, we have at any time n that
$$\sup_{\mathbf{x}^n}\mathbb{E}_{Y^n}\mathbb{E}_n|f_n(X) - f(X)|^2 \;\le\; C'\big(\delta_Y^2 + \lambda^2\big)\,n^{-2/(2+d)}.$$
The convergence rate is therefore $\tilde{O}(n^{-2/(2+d)})$, nearly optimal in terms of the unknown d (up to a log n factor). In the simulation of Figure 2 (left) we compare our procedure to tree-based regressors with a fixed setting of d and of $\epsilon_t = t^{-1/(2+d)}$. We use the classic rotating-Teapot dataset, where the target output values are the cosine of the rotation angles. Our method attains the same performance as the one with the right fixed setting of d.
As alluded to above, the proof of Theorem 1 proceeds by first bounding the average error $\mathbb{E}_n\mathbb{E}_{Y^n}|f_n(X) - f(X)|^2$ on the sample $\mathbf{x}^n$. Interestingly, the analysis of the average error is
[Figure 2: two panels plotting Normalized RMSE on the test set against the number of training samples. Left: Teapot dataset, comparing the Incremental Tree-based method with fixed settings d = 1, 4, 8. Right: Synthetic dataset (d = 5, D = 100, first 1000 samples with d = 1), comparing with fixed settings d = 1, 5, 10.]
Figure 2: Simulation results on Teapot (Left) and Synthetic (Right) datasets. $\tilde{C}$ set to 1; sizes of the test sets are 1800 and 12500, respectively.
of a worst-case nature, where the data $x_1, x_2, \dots$ is allowed to arrive adversarially (see analysis of Section 4.1). This shows a sense in which the algorithm is robust to bad dimension estimates: the average error is of the optimal form in terms of d, even though the data could trick us into picking a bad guess $d_i$ of d. Thus the insights behind the algorithm are perhaps of wider applicability to problems of a more adversarial nature. This is shown empirically in Figure 2 (right), where we created a synthetic dataset with d = 5, while the first 1000 samples are from a line in $\mathcal{X}$. An algorithm that estimates the dimension in a first phase would end up using the suboptimal setting d = 1, while our algorithm robustly updates its estimate over time.
As mentioned in the introduction, the same insights can be applied to other forms of regression in a streaming setting. We show in the longer version of the paper a procedure more akin to kernel regression, which employs other quantities (appropriate to the method) to control the bias-variance tradeoff while deciding on keeping or rejecting the guess $d_i$.
3.3 Lower-bounds
We have to produce a distribution for which the problem is hard, and which matches our streaming setting as well as possible. With this in mind, our lower-bound result differs from existing nonparametric lower-bounds by combining two important aspects. First, the lower-bound holds for any given metric measure space $(\mathcal{X}, \rho, \mu)$ with finite measure-dimension: we constrain the worst-case distribution to have the marginal $\mu$ that nature happens to choose. In contrast, lower-bounds in the literature would commonly pick a suitable marginal on the space $\mathcal{X}$ [13, 14]. Second, the worst-case distribution does not depend on the sample size, as is common in the literature. Instead, we show that the rate of $n^{-2/(2+d)}$ holds for infinitely many n for a distribution fixed beforehand. This is more appropriate for the online setting where the data is generated over time from a fixed distribution.
The lower-bound result of [9] also holds for a given measure space $(\mathcal{X}, \rho, \mu)$, but the worst-case distribution depends on the sample size. A lower-bound of [7] holds for infinitely many n, but is restricted to distributions on a Euclidean cube, and thus benefits from the regularity of the cube. Our result combines some technical intuition from these two results in a way described in Section 4.3.
We need the following definition.
Definition 3. Given a metric-measure space $(\mathcal{X}, \rho, \mu)$, we let $\mathcal{D}_{\mu,\lambda}$ denote the set of distributions on X, Y, $X \in \mathcal{X}$, $Y \in \mathbb{R}$, where the marginal on X is $\mu$, and where the function $f(x) = \mathbb{E}[Y|X = x]$ is $\lambda$-Lipschitz w.r.t. $\rho$.
Theorem 2. Let $(\mathcal{X}, \rho, \mu)$ be a metric space of diameter 1 and metric-measure dimension d. For any $n \in \mathbb{N}$, define $r_n^2 = (\lambda^2 n)^{-2/(2+d)}$. Pick any positive sequence $\{a_n\}_{n\in\mathbb{N}}$, $a_n = o(\lambda^2 r_n^2)$. There exists an indexing subsequence $\{n_t\}_{t\in\mathbb{N}}$, $n_{t+1} > n_t$, such that
$$\inf_{\{f_n\}}\ \sup_{\mathcal{D}_{\mu,\lambda}}\ \lim_{t\to\infty}\ \frac{\mathbb{E}_{X^{n_t},Y^{n_t}}\|f_{n_t} - f\|^2_{2,\mu}}{a_{n_t}} = \infty,$$
where the infimum is taken over all sequences $\{f_n\}$ of estimators $f_n : X^n, Y^n \mapsto L_{2,\mu}$.
By the statement of the theorem, if we pick any rate $a_n$ faster than $n^{-2/(2+d)}$, then there exists a distribution with marginal $\mu$ for which $\mathbb{E}\|f_n - f\|^2/a_n$ either diverges or tends to $\infty$.
4 Analysis
We first analyze the average error of the algorithm over the data $\mathbf{x}^n$ in Section 4.1. The proof of the main theorem follows in Section 4.2.
4.1 Bounds on Average Error
We start by bounding the average error on the sample $\mathbf{x}^n$ at time n, that is, we upper-bound $\mathbb{E}_n\mathbb{E}_{Y^n}|f_n(X) - f(X)|^2$.
The proof idea of the upper bound is the following. We bound the error in a given phase (Lemma 4), then combine these errors over all phases to obtain the final bounds (Corollary 1). To bound the error in a phase, we decompose the error in terms of squared bias and variance. The main technical difficulty is that the bandwidth $\epsilon_t$ varies over time and thus points at varying distances are included in each estimate. Nevertheless, if $n_i$ is the number of steps in Phase i, we will see that both average squared bias and variance can be bounded by roughly $n_i^{-2/(2+d_i)}$.
Finally, the algorithm ensures that the guess $d_i$ is always an under-estimate of the unknown dimension d, as proven in Lemma 3 (proof in the supplemental appendix), so integrating over all phases yields an adaptive bound on the average error. We assume throughout this section that the space $(\mathcal{X}, \rho)$ has metric dimension d for some C (see Def. 2).
Lemma 3. Suppose the algorithm parameter $\tilde{C} \ge C$. The following invariants hold throughout the procedure for all phases $i \ge 1$ of Algorithm 1:
• $i \le d_i \le d$.
• For any t ∈ Phase i we have $|X_i| \le \tilde{C}\,4^{d_i}\epsilon_t^{-d_i}$.
Lemma 4 (Bound on single phase). Suppose the algorithm parameter $\tilde{C} \ge C$. Consider Phase $i \ge 1$ and suppose this phase lasts $n_i$ steps. Let $\mathbb{E}_{n_i}$ denote expectation relative to the uniform choice of X out of $\{x_t : t \in \text{Phase } i\}$. We have the following bound:
$$\mathbb{E}_{n_i}\mathbb{E}_{Y^n}|f_n(X) - f(X)|^2 \;\le\; \big(\tilde{C}\,4^d\delta_Y^2 + 12\lambda^2\big)\,n_i^{-\frac{2}{2+d}}.$$
Proof. Let $X_i(X)$ denote the center closest to X in $X_i$. Suppose $X_i(X) = x_s$, $s \in [n]$; we let $n_{x_s}$ denote the number of points assigned to the center $x_s$. We use the notation $x_t \to x_s$ to say that $x_t$ is assigned to $x_s$.
First fix $X \in \{x_t : t \in \text{Phase } i\}$ and let $x_s = X_i(x_t)$. Define $\bar{f}_n(X) \triangleq \mathbb{E}_{Y^n}f_n(X) = \frac{1}{n_{x_s}}\sum_{x_t\to x_s} f(x_t)$. We proceed with the following standard bias-variance decomposition:
$$\mathbb{E}_{Y^n}|f_n(X) - f(X)|^2 = \mathbb{E}_{Y^n}\big|f_n(X) - \bar{f}_n(X)\big|^2 + \big|\bar{f}_n(X) - f(X)\big|^2. \qquad (1)$$
Let $X = x_r$, $r \ge s$. We first bound the bias term. Using the Lipschitz property of f and Jensen's inequality, we have
$$\big|\bar{f}_n(X) - f(X)\big|^2 \le \Big(\frac{1}{n_{x_s}}\sum_{x_t\to x_s}\lambda\,\rho(x_r,x_t)\Big)^2 \le \frac{\lambda^2}{n_{x_s}}\sum_{x_t\to x_s}\rho(x_r,x_t)^2 \le \frac{2\lambda^2}{n_{x_s}}\sum_{x_t\to x_s}\big(\rho(x_r,x_s)^2 + \rho(x_s,x_t)^2\big) \le \frac{2\lambda^2}{n_{x_s}}\sum_{x_t\to x_s}\big(\epsilon_r^2 + \epsilon_t^2\big).$$
The variance term is easily bounded as follows:
$$\mathbb{E}_{Y^n}\big|f_n(X) - \bar{f}_n(X)\big|^2 = \sum_{x_t\to x_s}\frac{\mathbb{E}_{Y^n}|Y_t - f(x_t)|^2}{n_{x_s}^2} \le \frac{\delta_Y^2}{n_{x_s}}.$$
Now take the expectation over $X \sim U\{x_t : t \in \text{Phase } i\}$. We have:
$$\mathbb{E}_{n_i}\mathbb{E}_{Y^n}|f_n(X) - f(X)|^2 = \frac{1}{n_i}\sum_{x_s\in X_i}\sum_{x_r\to x_s}\mathbb{E}_{Y^n}|f_n(x_r) - f(x_r)|^2 \le \frac{1}{n_i}\sum_{x_s\in X_i}\sum_{x_r\to x_s}\Big(\frac{\delta_Y^2}{n_{x_s}} + \frac{2\lambda^2}{n_{x_s}}\sum_{x_t\to x_s}\big(\epsilon_r^2 + \epsilon_t^2\big)\Big)$$
$$= \frac{\delta_Y^2\,|X_i|}{n_i} + \frac{4\lambda^2}{n_i}\sum_{x_s\in X_i}\sum_{x_t\to x_s}\epsilon_t^2 = \frac{\delta_Y^2\,|X_i|}{n_i} + \frac{4\lambda^2}{n_i}\sum_{t\in\text{Phase } i}\epsilon_t^2.$$
To bound the last term, we have
$$\sum_{t\in\text{Phase } i}\epsilon_t^2 = \sum_{t_i\in[n_i]} t_i^{-\frac{2}{2+d_i}} \le \int_0^{n_i}\tau^{-\frac{2}{2+d_i}}\,d\tau \le 3\,n_i^{1-\frac{2}{2+d_i}}.$$
Combine with the previous derivation and with both statements of Lemma 3 to get
$$\mathbb{E}_{n_i}\mathbb{E}_{Y^n}|f_n(X) - f(X)|^2 \le \frac{\delta_Y^2\,|X_i|}{n_i} + 12\lambda^2 n_i^{-\frac{2}{2+d_i}} \le \big(\tilde{C}\,4^d\delta_Y^2 + 12\lambda^2\big)\,n_i^{-\frac{2}{2+d}}.$$
Corollary 1 (Combined phases). Suppose the algorithm parameter C? ? C?? , then we have
2
2
E n En |fn (X) ? f (X)| ? 2 C? 4d ?2Y + 12?2 n? 2+d .
Y
Proof. Let I denote the number of phases up to time n. We will decompose the expectation E_n in terms of the various phases i ∈ [I] and apply Lemma 4. Let B := C̃ 4^d σ_Y^2 + 12λ^2. We have:

    E_n E_{Y^n} |f_n(X) - f(X)|^2 ≤ B Σ_{i=1}^{I} (n_i/n) n_i^{-2/(2+d)} = B (I/n) Σ_{i=1}^{I} (1/I) n_i^{d/(2+d)}
        ≤ B (I/n) ( (1/I) Σ_{i=1}^{I} n_i )^{d/(2+d)} = B (I/n) (n/I)^{d/(2+d)}
        = B I^{2/(2+d)} n^{-2/(2+d)} ≤ B d^{2/(2+d)} n^{-2/(2+d)} ≤ 2 B n^{-2/(2+d)} ,

where in the second inequality we use Jensen's inequality, in the third inequality the fact that I ≤ d from Lemma 3, and in the last step that d^{2/(2+d)} ≤ 2 for all d ≥ 1.
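The phase-combination step can be sanity-checked numerically. The snippet below is an illustration only (B is normalized to 1 and a doubling phase schedule is assumed); it evaluates the exact phase-weighted sum from the first line of the display against the closed form 2 n^{-2/(2+d)}:

    def phase_sum(phase_lengths, d):
        # sum_i (n_i / n) * n_i^{-2/(2+d)}, the quantity bounded in Corollary 1
        n = sum(phase_lengths)
        return sum(ni / n * ni ** (-2.0 / (2 + d)) for ni in phase_lengths)

    d = 3
    phases = [2 ** i for i in range(1, d + 1)]   # I = d phases, doubling lengths
    n = sum(phases)
    lhs = phase_sum(phases, d)
    rhs = 2 * n ** (-2.0 / (2 + d))              # Corollary 1 bound with B = 1
    assert lhs <= rhs
    print(lhs, rhs)                              # roughly 0.52 <= 0.70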
4.2 Bound on L2 Error
We need the following lemma, whose proof is in the supplemental appendix, which bounds the probability that a ρ-ball of a given radius contains a sample from x^n. This will then allow us to bound the bias induced by transforming a solution for the adversarial setting to a solution for the stochastic setting.

Lemma 5. Suppose (X, ρ, µ) has metric measure dimension d. Let µ be a distribution on X and let µ_n denote an empirical distribution on an i.i.d. sample x^n from µ. For ε > 1/n, let B_ε denote the class of ρ-balls centered on X of radius ε. There exists C depending on d such that the following holds. Let 0 < δ < 1 and define α_{n,δ} = C (d log n + log(1/δ)). Then, with probability at least 1 - δ, for all B ∈ B_ε satisfying µ(B) ≥ α_{n,δ}/n we have µ_n(B) > 1/n.
We are now ready to prove Theorem 1.

Proof of Theorem 1. Fix δ = 1/n and define α_{n,δ} as in Lemma 5. Pick ε = ( α_{n,δ} / (C_1 n) )^{1/d} ≥ 1/n, where C_1 is such that every B ∈ B_ε has mass at least C_1 ε^d. Since for every B ∈ B_ε, µ(B) ≥ C_1 ε^d ≥ α_{n,δ}/n, we have by Lemma 5 that, with probability at least 1 - δ, all B ∈ B_ε contain a point from x^n. In other words, the event E that x^n forms an ε-cover of X is (1 - δ)-likely.

Suppose x_t is the closest point in x^n to x ∈ X. We write x → x_t. Then, under E, we have |f(x) - f(x_t)| ≤ λε. We therefore have by Fubini's theorem

    E ||f_n - f||^2_{2,µ} = E_{x^n, Y^n} E_X E_{Y^n | x^n} |f_n(X) - f(X)|^2 ( 1{E} + 1{Ē} )
        ≤ E_{x^n} Σ_{t=1}^{n} 2 µ(x : x → x_t) E_{Y^n | x^n} |f_n(x_t) - f(x_t)|^2 + 2λ^2 ε^2 + δ σ_Y^2
        ≤ E_{x^n} Σ_{t=1}^{n} 2 C_2 ε^d E_{Y^n | x^n} |f_n(x_t) - f(x_t)|^2 + 2λ^2 ε^2 + δ σ_Y^2
        ≤ (2 C_2 α_{n,δ} / C_1) sup_{x^n} E_n E_{Y^n} |f_n(x_t) - f(x_t)|^2 + 2λ^2 ε^2 + δ σ_Y^2 ,

where in the first inequality we break the integration over the Voronoi partition of X defined by the points in x^n, and introduce f(x_t); the second inequality uses {x : x → x_t} ⊆ B(x_t, ε) under E.
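The covering event E used above is easy to probe empirically. The sketch below is illustrative only (its constants are not the C, C_1 of the proof): it draws a uniform i.i.d. sample on the unit square, sets ε at the rate suggested by α_{n,δ}/n, and checks whether every point of a fine grid has a sample point within distance ε, i.e., whether x^n forms an ε-cover:

    import math
    import random

    def is_eps_cover(sample, grid, eps):
        # True iff every grid point has some sample point within distance eps.
        return all(min(math.dist(g, s) for s in sample) <= eps for g in grid)

    random.seed(0)
    n, d = 2000, 2
    sample = [(random.random(), random.random()) for _ in range(n)]
    eps = (math.log(n) / n) ** (1.0 / d)   # an (alpha_{n,delta}/n)^{1/d}-type rate
    m = 25
    grid = [(i / m, j / m) for i in range(m + 1) for j in range(m + 1)]
    print(is_eps_cover(sample, grid, eps))  # typically True at this rate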
4.3 Lower-bound

Let us consider first the case of a fixed n. The idea behind the proof is as follows: for µ fixed, we have to come up with a class F of functions which vary considerably on the space X. To this end we discretize X into as many cells as possible, and let any f ∈ F potentially change sign from one cell to the other. The larger the dimension d, the more we can discretize the space and the more complex F, subject to a Lipschitz constraint. The problem of picking the right f can thus be reduced to that of classification, since the learner has to discover the sign of f on sufficiently many cells.

In order to handle many data sizes n simultaneously, we borrow from the idea above. Say we want to show that the lower-bound holds for a subsequence {n_i} simultaneously. Then we reserve a subset of the space X for each n_1, n_2, . . ., and discretize each subset according to n_i. The functions in F then have to vary sufficiently in each subset of the space X according to the corresponding n_i. This is illustrated in Figure 3. We can then apply the same idea of reduction to classification for each n_t separately. This sort of idea appears in [7] where µ is uniform on the Euclidean cube, and where they use the regularity of the cube to set up the right sequence of discretizations over subsets of the cube. The main technicality in our result is that we work with a general space without much regularity. The lack of regularity makes it unclear a priori how to divide such a space into subsets of the proper size for each n_i.

Figure 3: Lower bound proof idea.

Last, we have to ensure that the functions f ∈ F resulting from our discretization of a general metric space X are in fact Lipschitz. For this, we extend some of the ideas from [9] which handles the case of a fixed n. For lack of space, the complete proof is in the extended version of the paper.
5 Conclusions

We presented an efficient and nearly minimax optimal approach to nonparametric regression in a streaming setting. The streaming setting is gaining more attention as modern data sizes are getting larger, and as data is being acquired online in many applications.

The main insights behind the approach presented extend to other nonparametric methods, and are likely to extend to settings of a more adversarial nature. We left open the question of optimal adaptation to the smoothness of the unknown function, while we efficiently solve the equally or more important question of adapting to the unknown dimension of the data, which generally has a stronger effect on the convergence rate.
References

[1] Y. Ben-Haim and E. Tom-Tov. A streaming parallel decision tree algorithm. Journal of Machine Learning Research, 11:849-872, 2010.
[2] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbors. ICML, 2006.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[4] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2005.
[5] P. Domingos and G. Hulten. Mining high-speed data streams. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 71-80, 2000.
[6] H. Gu and J. Lafferty. Sequential nonparametric regression. ICML, 2012.
[7] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.
[8] A. Kalai and S. Vempala. Efficient algorithms for universal portfolios. Journal of Machine Learning Research, 3:423-440, 2002.
[9] S. Kpotufe. k-NN regression adapts to local intrinsic dimension. NIPS, 2011.
[10] R. Krauthgamer and J. R. Lee. Navigating nets: simple algorithms for proximity search. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '04, pages 798-807, Philadelphia, PA, USA, 2004. Society for Industrial and Applied Mathematics.
[11] B. Pfahringer, G. Holmes, and R. Kirkby. Handling numeric attributes in Hoeffding trees. In Advances in Knowledge Discovery and Data Mining: Proceedings of the 12th Pacific-Asia Conference (PAKDD), volume 5012, pages 296-307. Springer, 2008.
[12] S. Schaal and C. Atkeson. Robot juggling: An implementation of memory-based learning. Control Systems Magazine, IEEE, 1994.
[13] C. J. Stone. Optimal rates of convergence for non-parametric estimators. Ann. Statist., 8:1348-1360, 1980.
[14] C. J. Stone. Optimal global rates of convergence for non-parametric estimators. Ann. Statist., 10:1340-1353, 1982.
[15] M. A. Taddy, R. B. Gramacy, and N. G. Polson. Dynamic trees for learning and design. Journal of the American Statistical Association, 106(493), 2011.
[16] S. Vijayakumar and S. Schaal. Locally weighted projection regression: An O(n) algorithm for incremental real time learning in high dimensional space. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), pages 1079-1086, 2000.
4,307 | 4,899 | Graphical Models for Inference with Missing Data
Karthika Mohan
Judea Pearl
Dept. of Computer Science
Dept. of Computer Science
Univ. of California, Los Angeles Univ. of California, Los Angeles
Los Angeles, CA 90095
Los Angeles, CA 90095
[email protected]
[email protected]
Jin Tian
Dept. of Computer Science
Iowa State University
Ames, IA 50011
[email protected]
Abstract
We address the problem of recoverability, i.e. deciding whether there exists a consistent estimator of a given relation Q, when data are missing not at random. We employ a formal representation called "Missingness Graphs" to explicitly portray
the causal mechanisms responsible for missingness and to encode dependencies
between these mechanisms and the variables being measured. Using this representation, we derive conditions that the graph should satisfy to ensure recoverability
and devise algorithms to detect the presence of these conditions in the graph.
1 Introduction
The ?missing data? problem arises when values for one or more variables are missing from recorded
observations. The extent of the problem is evidenced from the vast literature on missing data in such
diverse fields as social science, epidemiology, statistics, biology and computer science. Missing
data could be caused by varied factors such as high cost involved in measuring variables, failure of
sensors, reluctance of respondents in answering certain questions or an ill-designed questionnaire.
Missing data also plays a major role in survival data analysis and has been treated primarily using
Kaplan-Meier estimation [30].
In machine learning, a typical example is the Recommender System [16] that automatically generates a list of products that are of potential interest to a given user from an incomplete dataset of
user ratings. Online portals such as Amazon and eBay employ such systems. Other areas such as
data mining [7], knowledge discovery [18] and network tomography [2] are also plagued by missing data problems. Missing data can have several harmful consequences [23, 26]. Firstly they can
significantly bias the outcome of research studies. This is mainly because the response profiles of
non-respondents and respondents can be significantly different from each other. Hence ignoring
the former distorts the true proportion in the population. Secondly, performing the analysis using
only complete cases and ignoring the cases with missing values can reduce the sample size thereby
substantially reducing estimation efficiency. Finally, many of the algorithms and statistical techniques are generally tailored to draw inferences from complete datasets. It may be difficult or even
inappropriate to apply these algorithms and statistical techniques on incomplete datasets.
1.1 Existing Methods for Handling Missing Data
There are several methods for handling missing data, described in a rich literature of books, articles and software packages, which are briefly summarized here.^1 Of these, listwise deletion and pairwise deletion are used in approximately 96% of studies in the social and behavioral sciences [24].

^1 For detailed discussions we direct the reader to the books [1, 6, 13, 17].
Listwise deletion refers to a simple method in which cases with missing values are deleted [3]. Unless data are missing completely at random, listwise deletion can bias the outcome [31]. Pairwise deletion (or "available case") is a deletion method used for estimating pairwise relations among variables. For example, to compute the covariance of variables X and Y, all those cases or observations
in which both X and Y are observed are used, regardless of whether other variables in the dataset
have missing values.
The expectation-maximization (EM) algorithm is a general technique for finding maximum likelihood (ML) estimates from incomplete data. It has been proven that likelihood-based inference
while ignoring the missing data mechanism, leads to unbiased estimates under the assumption of
missing at random (MAR) [13]. Most work in machine learning assumes MAR and proceeds with
ML or Bayesian inference. Exceptions are recent works on collaborative filtering and recommender
systems which develop probabilistic models that explicitly incorporate missing data mechanism
[16, 14, 15]. ML is often used in conjunction with imputation methods, which in layman terms,
substitutes a reasonable guess for each missing value [1]. A simple example is Mean Substitution, in
which all missing observations of variable X are substituted with the mean of all observed values of
X. Hot-deck imputation, cold-deck imputation [17] and Multiple Imputation [26, 27] are examples
of popular imputation procedures. Although these techniques work well in practice, performance
guarantees (eg: convergence and unbiasedness) are based primarily on simulation experiments.
Missing data discussed so far is a special case of coarse data, namely data that contains observations
made in the power set rather than the sample space of variables of interest [12]. The notion of coarsening at random (CAR) was introduced in [12] and identifies the condition under which the coarsening mechanism can be ignored while drawing inferences on the distribution of variables of interest [10].
The notion of sequential CAR has been discussed in [9]. For a detailed discussion on coarsened data
refer to [30].
Missing data literature leaves many unanswered questions with regard to theoretical guarantees for
the resulting estimates, the nature of the assumptions that must be made prior to employing various
procedures and whether the assumptions are testable. For a gentle introduction to the missing data
problem and the issue of testability refer to [22, 19]. This paper aims to illuminate missing data
problems using causal graphs [See Appendix 5.2 for justification]. The questions we pose are:
Given a target relation Q to be estimated and a set of assumptions about the missingness process
encoded in a graphical model, under what conditions does a consistent estimate exist and how can
we elicit it from the data available?
We answer these questions with the aid of Missingness Graphs (m-graphs in short) to be described
in Section 2. Furthermore, we review the traditional taxonomy of missing data problems and cast it
in graphical terms. In Section 3 we define the notion of recoverability - the existence of a consistent
estimate - and present graphical conditions for detecting recoverability of a given probabilistic query
Q. Conclusions are drawn in Section 4.
2 Graphical Representation of the Missingness Process

2.1 Missingness Graphs
Figure 1: m-graphs for data that are: (a) MCAR, (b) MAR, (c) & (d) MNAR; hollow and solid circles denote partially and fully observed variables respectively.
Graphical models such as DAGs (Directed Acyclic Graphs) can be used for encoding as well as portraying conditional independencies and causal relations, and the graphical criterion called d-separation (refer Appendix 5.1, Definition 3) can be used to read them off the graph [21, 20]. Graphical models have been used to analyze missing information in the form of missing cases (due to sample selection bias) [4]. Using causal graphs, [8] analyzes missingness due to attrition (partially observed outcome) and [29] cautions against the indiscriminate use of auxiliary variables. In both papers missing values are associated with one variable and interactions among several missingness
The need exists for a general approach capable of modeling an arbitrary data-generating process and
deciding whether (and how) missingness can be outmaneuvered in every dataset generated by that
process. Such a general approach should allow each variable to be governed by its own missingness
mechanism, and each mechanism to be triggered by other (potentially) partially observed variables
in the model. To achieve this flexibility we use a graphical model called "missingness graph" (m-graph, for short) which is a DAG (Directed Acyclic Graph) defined as follows.
Let G(V, E) be the causal DAG where V = V ∪ U ∪ V* ∪ R. V is the set of observable nodes. Nodes in the graph correspond to variables in the data set. U is the set of unobserved nodes (also called latent variables). E is the set of edges in the DAG. Oftentimes we use bi-directed edges as a shorthand notation to denote the existence of a U variable as common parent of two variables in Vo ∪ Vm ∪ R. V is partitioned into Vo and Vm such that Vo ⊆ V is the set of variables that are observed in all records in the population and Vm ⊆ V is the set of variables that are missing in at least one record. Variable X is termed as fully observed if X ∈ Vo and partially observed if X ∈ Vm.

Associated with every partially observed variable Vi ∈ Vm are two other variables R_Vi and Vi*, where Vi* is a proxy variable that is actually observed, and R_Vi represents the status of the causal mechanism responsible for the missingness of Vi*; formally,

    Vi* = f(R_Vi, Vi) = Vi  if R_Vi = 0,
                        m   if R_Vi = 1.        (1)

Contrary to conventional use, R_Vi is not treated merely as the missingness indicator but as a driver (or a switch) that enforces equality between Vi and Vi*. V* is the set of all proxy variables and R is the set of all causal mechanisms that are responsible for missingness. R variables may not be parents of variables in V ∪ U. This graphical representation succinctly depicts both the causal relationships among variables in V and the process that accounts for missingness in some of the variables. We call this graphical representation Missingness Graph, or m-graph for short. Since every d-separation in the graph implies conditional independence in the distribution [21], the m-graph provides an effective way of representing the statistical properties of the missingness process and, hence, the potential of recovering the statistics of variables in Vm from partially missing data.
2.2 Taxonomy of Missingness Mechanisms
It is common to classify missing data mechanisms into three types [25, 13]:
Missing Completely At Random (MCAR) : Data are MCAR if the probability that Vm is missing
is independent of Vm or any other variable in the study, as would be the case when respondents
decide to reveal their income levels based on coin-flips.
Missing At Random (MAR) : Data are MAR if for all data cases Y , P (R|Yobs , Ymis ) = P (R|Yobs )
where Yobs denotes the observed component of Y and Ymis , the missing component. Example:
Women in the population are more likely to not reveal their age.
Missing Not At Random (MNAR) or "non-ignorable missing": Data that are neither MAR nor MCAR are termed as MNAR. Example: Online shoppers rate an item with a high probability either if they love the item or if they loathe it. In other words, the probability that a shopper supplies a rating is dependent on the shopper's underlying liking [16].

Because it invokes specific values of the observed and unobserved variables (i.e., Yobs and Ymis), many authors find Rubin's definition difficult to apply in practice and prefer to work with definitions expressed in terms of independencies among variables (see [28, 11, 6, 17]). In the graph-based interpretation used in this paper, MCAR is defined as total independence between R and Vo ∪ Vm ∪ U, i.e. R ⊥⊥ (Vo ∪ Vm ∪ U), as depicted in Figure 1(a). MAR is defined as independence between R and Vm ∪ U given Vo, i.e. R ⊥⊥ (Vm ∪ U) | Vo, as depicted in Figure 1(b). Finally, if neither of these conditions holds, data are termed MNAR, as depicted in Figure 1(c) and (d). This graph-based interpretation uses slightly stronger assumptions than Rubin's, with the advantage that the user can comprehend, encode and communicate the assumptions that determine the classification of the problem. Additionally, the conditional independencies that define each class are represented explicitly as separation conditions in the corresponding m-graphs. We will use this taxonomy in the rest of the paper, and will label data MCAR, MAR and MNAR according to whether the defining conditions, R ⊥⊥ (Vo ∪ Vm ∪ U) (for MCAR), R ⊥⊥ (Vm ∪ U) | Vo (for MAR), are satisfied in the corresponding m-graphs.
3
Recoverability
In this section we will examine the conditions under which a bias-free estimate of a given probabilistic relation Q can be computed. We shall begin by defining the notion of recoverability.
Definition 1 (Recoverability). Given a m-graph G, and a target relation Q defined on the variables in V, Q is said to be recoverable in G if there exists an algorithm that produces a consistent estimate of Q for every dataset D such that P(D) is (1) compatible with G and (2) strictly positive over complete cases, i.e., P(Vo, Vm, R = 0) > 0.^2
Here we assume that the observed distribution over complete cases P (Vo , Vm , R = 0) is strictly
positive, thereby rendering recoverability a property that can be ascertained exclusively from the
m-graph.
Corollary 1. A relation Q is recoverable in G if and only if Q can be expressed in terms of the probability P(O), where O = {R, V*, Vo} is the set of observable variables in G. In other words, for any two models M1 and M2 inducing distributions P^{M1} and P^{M2} respectively, if P^{M1}(O) = P^{M2}(O) > 0 then Q^{M1} = Q^{M2}.

Proof: (sketch) The corollary merely rephrases the requirement of obtaining a consistent estimate to that of expressibility in terms of observables.
Practically, what recoverability means is that if the data D are generated by any process compatible with G, a procedure exists that computes an estimator Q̂(D) such that, in the limit of large samples, Q̂(D) converges to Q. Such a procedure is called a "consistent estimator." Thus, recoverability is the sole property of G and Q, not of the data available, or of any routine chosen to analyze or process the data.
Recoverability when data are MCAR. For MCAR data we have R ⊥⊥ (Vo ∪ Vm). Therefore, we can write P(V) = P(V | R) = P(Vo, V* | R = 0). Since both R and V* are observables, the joint probability P(V) is consistently estimable (hence recoverable) by considering complete cases only (listwise deletion), as shown in the following example.
Example 1. Let X be the treatment and Y be the outcome as depicted in the m-graph in Fig. 1(a). Let it be the case that we accidentally deleted the values of Y for a handful of samples, hence Y ∈ Vm. Can we recover P(X, Y)?

From D, we can compute P(X, Y*, Ry). From the m-graph G, we know that Y* is a collider and hence by d-separation, (X ∪ Y) ⊥⊥ Ry. Thus P(X, Y) = P(X, Y | Ry). In particular, P(X, Y) = P(X, Y | Ry = 0). When Ry = 0, by eq. (1), Y* = Y. Hence,

    P(X, Y) = P(X, Y* | Ry = 0)   (2)

The RHS of Eq. 2 is consistently estimable from D; hence P(X, Y) is recoverable.
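To make Eq. (2) concrete, here is a minimal sketch of the corresponding estimator. The pandas usage, the column names, and the encoding of the missing value m as NaN are assumptions of this illustration, not part of the paper:

    import pandas as pd

    def estimate_joint_mcar(df, x="X", y="Y"):
        # Eq. (2): P(X, Y) = P(X, Y* | Ry = 0).  Ry = 0 means Y* is observed,
        # so under MCAR it suffices to restrict to complete cases (listwise
        # deletion) and tabulate the empirical joint distribution.
        complete = df.dropna(subset=[y])
        return complete.groupby([x, y]).size() / len(complete)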
Recoverability when data are MAR. When data are MAR, we have R ⊥⊥ Vm | Vo. Therefore P(V) = P(Vm | Vo) P(Vo) = P(Vm | Vo, R = 0) P(Vo). Hence the joint distribution P(V) is recoverable.
Example 2. Let X be the treatment and Y be the outcome as depicted in the m-graph in Fig. 1(b). Let it be the case that some patients who underwent treatment are not likely to report the outcome, hence the arrow X → Ry. Under the circumstances, can we recover P(X, Y)?

From D, we can compute P(X, Y*, Ry). From the m-graph G, we see that Y* is a collider and X is a fork. Hence by d-separation, Y ⊥⊥ Ry | X. Thus P(X, Y) = P(Y | X) P(X) = P(Y | X, Ry) P(X).
^2 In many applications such as truncation by death, the problem forbids certain combinations of events from occurring, in which case the definition need be modified to accommodate such constraints as shown in Appendix 5.3. Though this modification complicates the definition of "recoverability", it does not change the basic results derived in this paper.
In particular, P(X, Y) = P(Y | X, Ry = 0) P(X). When Ry = 0, by eq. (1), Y* = Y. Hence,

    P(X, Y) = P(Y* | X, Ry = 0) P(X)   (3)
and since X is fully observable, P (X, Y ) is recoverable.
Note that eq. (2) permits P (X, Y ) to be recovered by listwise deletion, while eq. (3) does not; it
requires that P (X) be estimated first over all samples, including those in which Y is missing. In
this paper we focus on recoverability under large sample assumption and will not be dealing with
the shrinking sample size issue.
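Eq. (3), by contrast, dictates a two-part estimator: P(X) must be estimated from every record, including those with Y missing, and only P(Y | X) from the complete records. A sketch under the same illustrative conventions as above:

    def estimate_joint_mar(df, x="X", y="Y"):
        # df is a pandas DataFrame; NaN in column y marks a missing Y value.
        p_x = df[x].value_counts(normalize=True)          # P(X), all records
        complete = df.dropna(subset=[y])
        p_y_given_x = (complete.groupby(x)[y]
                               .value_counts(normalize=True))  # P(Y* | X, Ry = 0)
        # Eq. (3): P(x, y) = P(y | x, Ry = 0) P(x)
        return {(xv, yv): p_y_given_x[(xv, yv)] * p_x[xv]
                for (xv, yv) in p_y_given_x.index}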
Recoverability when data are MNAR. Data that are neither MAR nor MCAR are termed MNAR. Though it is generally believed that relations in MNAR datasets are not recoverable, the following example demonstrates otherwise.

Example 3. Fig. 1(d) depicts a study where (i) some units who underwent treatment (X = 1) did not report the outcome (Y) and (ii) we accidentally deleted the values of treatment for a handful of cases. Thus we have missing values for both X and Y, which renders the dataset MNAR. We shall show that P(X, Y) is recoverable. From D, we can compute P(X*, Y*, Rx, Ry). From the m-graph G, we see that X ⊥⊥ Rx and Y ⊥⊥ (Rx ∪ Ry) | X. Thus P(X, Y) = P(Y | X) P(X) = P(Y | X, Ry = 0, Rx = 0) P(X | Rx = 0). When Ry = 0 and Rx = 0 we have (by Equation (1)) Y* = Y and X* = X. Hence,

    P(X, Y) = P(Y* | X*, Rx = 0, Ry = 0) P(X* | Rx = 0)   (4)

Therefore, P(X, Y) is recoverable.
The estimand in eq. (4) also dictates how P(X, Y) should be estimated from the dataset. In the first step, we delete all cases in which X is missing and create a new data set D' from which we estimate P(X). Dataset D' is further pruned to form dataset D'' by removing all cases in which Y is missing. P(Y | X) is then computed from D''. Note that order matters; had we deleted cases in the reverse order, Y and then X, the resulting estimate would be biased because the d-separations needed for establishing the validity of the estimand P(X | Y) P(Y) are not supported by G. We will call this sequence of deletions the deletion order.
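The deletion order (X, Y) of Eq. (4) translates directly into code; a sketch under the same illustrative conventions:

    def estimate_joint_deletion_order(df, x="X", y="Y"):
        d1 = df.dropna(subset=[x])                 # form D' by deleting missing X
        p_x = d1[x].value_counts(normalize=True)   # P(X* | Rx = 0)
        d2 = d1.dropna(subset=[y])                 # prune D' to D'' on missing Y
        p_y_given_x = d2.groupby(x)[y].value_counts(normalize=True)
        # Eq. (4): P(x, y) = P(y | x, Rx = 0, Ry = 0) P(x | Rx = 0)
        return {(xv, yv): p_y_given_x[(xv, yv)] * p_x[xv]
                for (xv, yv) in p_y_given_x.index}

Swapping the two dropna calls implements the reverse deletion order (Y, X) and, as argued above, yields a biased estimate under this graph.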
Several features are worth noting regarding this graph-based taxonomy of missingness mechanisms. First, although MCAR and MAR can be verified by inspecting the m-graph, they cannot, in general, be verified from the data alone. Second, the assumption of MCAR allows an estimation procedure that amounts (asymptotically) to listwise deletion, while MAR dictates a procedure that amounts to listwise deletion in every stratum of Vo. Applying the MAR procedure to an MCAR problem is safe, because all conditional independencies required for recoverability under the MAR assumption also hold in an MCAR problem, i.e. R ⊥⊥ (Vo, Vm) implies R ⊥⊥ Vm | Vo. The converse, however, does not hold, as can be seen in Fig. 1(b). Applying listwise deletion there is likely to result in bias, because the necessary condition R ⊥⊥ (Vo, Vm) is violated in the graph. An interesting property which evolves from this discussion is that recoverability of certain relations does not require R_Vi ⊥⊥ Vi | Vo; a subset of Vo would suffice, as shown below.
Property 1. P(Vi) is recoverable if there exists W ⊆ Vo such that Vi ⊥⊥ R_Vi | W.

Proof: P(Vi) may be decomposed as P(Vi) = Σ_w P(Vi* | R_Vi = 0, w) P(w), since Vi ⊥⊥ R_Vi | W and W ⊆ Vo. Hence P(Vi) is recoverable.
It is important to note that the recoverability of P(X, Y) in Fig. 1(d) was feasible despite the fact that the missingness model would not be considered Rubin's MAR (as defined in [25]). In fact, an overwhelming majority of the data generated by each one of our MNAR examples would be outside Rubin's MAR. For a brief discussion on these lines, refer to Appendix 5.4.
Our next question is: how can we determine if a given relation is recoverable? The following
theorem provides a sufficient condition for recoverability.
3.1 Conditions for Recoverability
Theorem 1. A query Q defined over variables in Vo ∪ Vm is recoverable if it is decomposable into terms of the form Qj = P(Sj | Tj) such that Tj contains the missingness mechanism R_V = 0 of every partially observed variable V that appears in Qj.
Proof: If such a decomposition exists, every Qj is estimable from the data, hence the entire expression for Q is recoverable.
Example 4. Equation (4) demonstrates a decomposition of Q = P (X, Y ) into a product of two
terms Q1 = P (Y |X, Rx = 0, Ry = 0) and Q2 = P (X|Rx = 0) that satisfy the condition of
Theorem 1. Hence Q is recoverable.
Example 5. Consider the problem of recovering Q = P(X, Y) from the m-graph of Fig. 3(b). Attempts to decompose Q by the chain rule, as was done in Eqs. (3) and (4), would not satisfy the conditions of Theorem 1. To witness, we write P(X, Y) = P(Y | X) P(X) and note that the graph does not permit us to augment any of the two terms with the necessary Rx or Ry terms; X is independent of Rx only if we condition on Y, which is partially observed, and Y is independent of Ry only if we condition on X, which is also partially observed. This deadlock can be disentangled however using a non-conventional decomposition:

    Q = P(X, Y) = P(X, Y) · P(Rx, Ry | X, Y) / P(Rx, Ry | X, Y)
                = P(Rx, Ry) P(X, Y | Rx, Ry) / ( P(Rx | Y, Ry) P(Ry | X, Rx) )   (5)

where the denominator was obtained using the independencies Rx ⊥⊥ (X, Ry) | Y and Ry ⊥⊥ (Y, Rx) | X shown in the graph. The final expression above satisfies Theorem 1 and renders P(X, Y) recoverable. This example again shows that recovery is feasible even when data are MNAR.
Theorem 2 operationalizes the decomposability requirement of Theorem 1.

Theorem 2 (Recoverability of the Joint P(V)). Given a m-graph G with no edges between the R variables and no latent variables as parents of R variables, a necessary and sufficient condition for recovering the joint distribution P(V) is that no variable X be a parent of its missingness mechanism R_X. Moreover, when recoverable, P(V) is given by

    P(v) = P(R = 0, v) / Π_i P(R_i = 0 | pa^o_{ri}, pa^m_{ri}, R_{Pa^m_{ri}} = 0),   (6)

where Pa^o_{ri} ⊆ Vo and Pa^m_{ri} ⊆ Vm are the parents of R_i.
Proof. (sufficiency) The observed joint distribution may be decomposed according to G as

    P(R = 0, v) = Σ_u P(v, u) P(R = 0 | v, u) = P(v) Π_i P(R_i = 0 | pa^o_{ri}, pa^m_{ri}),   (7)

where we have used the facts that there are no edges between the R variables, and that there are no latent variables as parents of R variables. If V_i is not a parent of R_i (i.e. V_i ∉ Pa^m_{ri}), then we have R_i ⊥⊥ R_{Pa^m_{ri}} | (Pa^o_{ri} ∪ Pa^m_{ri}). Therefore,

    P(R_i = 0 | pa^o_{ri}, pa^m_{ri}) = P(R_i = 0 | pa^o_{ri}, pa^m_{ri}, R_{Pa^m_{ri}} = 0).   (8)

Given strictly positive P(R = 0, Vm, Vo), we have that all probabilities P(R_i = 0 | pa^o_{ri}, pa^m_{ri}, R_{Pa^m_{ri}} = 0) are strictly positive. Using Equations (7) and (8), we conclude that P(V) is recoverable as given by Eq. (6).

(necessity) If X is a parent of its missingness mechanism R_X, then P(X) is not recoverable based on Lemmas 3 and 4 in Appendix 5.5. Therefore the joint P(V) is not recoverable.
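Eq. (6) suggests a direct plug-in estimator for discrete data once the parents of each R_i have been read off the m-graph. The sketch below is illustrative only: the column-naming scheme (an indicator column 'R_<v>' per partially observed column v, with 0 = observed), the pandas API usage, and the fully tabulated discrete setting are all assumptions of this sketch, not part of the paper.

    import pandas as pd

    def estimate_joint_eq6(df, mis_vars, r_pa_obs, r_pa_mis):
        # Plug-in version of Eq. (6):
        #   P(v) = P(R = 0, v) / prod_i P(R_i = 0 | pa^o, pa^m, R_{pa^m} = 0).
        r_cols = ["R_" + v for v in mis_vars]
        v_cols = [c for c in df.columns if c not in r_cols]
        complete = df[(df[r_cols] == 0).all(axis=1)]
        p_num = complete.groupby(v_cols).size() / len(df)      # P(R = 0, v)

        def p_r0(v, row):
            # P(R_v = 0 | pa^o, pa^m, R_{pa^m} = 0), cf. Eq. (8)
            sub = df
            if r_pa_mis[v]:
                sub = sub[(sub[["R_" + p for p in r_pa_mis[v]]] == 0).all(axis=1)]
            for p in r_pa_obs[v] + r_pa_mis[v]:
                sub = sub[sub[p] == row[p]]
            return (sub["R_" + v] == 0).mean()

        result = {}
        for vals, num in p_num.items():
            vals_t = vals if isinstance(vals, tuple) else (vals,)
            row = dict(zip(v_cols, vals_t))
            denom = 1.0
            for v in mis_vars:
                denom *= p_r0(v, row)
            result[vals] = num / denom
        return pd.Series(result)

For the m-graph of Fig. 1(d), for instance, one would call this with mis_vars=['X', 'Y'], r_pa_obs={'X': [], 'Y': []} and r_pa_mis={'X': [], 'Y': ['X']}, recovering the same estimand as Eq. (4).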
The following theorem gives a sufficient condition for recovering the joint distribution in a Markovian model.

Theorem 3. Given a m-graph with no latent variables (i.e., Markovian), the joint distribution P(V) is recoverable if no missingness mechanism R_X is a descendant of its corresponding variable X. Moreover, if recoverable, then P(V) is given by

    P(v) = Π_{i : Vi ∈ Vo} P(vi | pa^o_i, pa^m_i, R_{Pa^m_i} = 0) · Π_{j : Vj ∈ Vm} P(vj | pa^o_j, pa^m_j, R_{Vj} = 0, R_{Pa^m_j} = 0),   (9)

where Pa^o_i ⊆ Vo and Pa^m_i ⊆ Vm are the parents of V_i.

Proof: Refer Appendix 5.6.
Definition 2 (Ordered factorization). An ordered factorization over a set O of ordered V variables Y1 < Y2 < . . . < Yk, denoted by f(O), is a product of conditional probabilities f(O) = Π_i P(Yi | Xi), where Xi ⊆ {Y_{i+1}, . . . , Y_n} is a minimal set such that Yi ⊥⊥ ({Y_{i+1}, . . . , Y_n} \ Xi) | Xi.
Theorem 4. A sufficient condition for recoverability of a relation Q is that Q be decomposable into an ordered factorization, or a sum of such factorizations, such that every factor Qi = P(Yi | Xi) satisfies Yi ⊥⊥ (R_{Yi}, R_{Xi}) | Xi. A factorization that satisfies this condition will be called admissible.
Figure 2: Graph in which (a) only P(X|Y) is recoverable, (b) P(X4) is recoverable only when conditioned on X1 as shown in Example 6, (c) P(X, Y, Z) is recoverable, (d) P(X, Z) is recoverable.
Proof. Follows from Theorem 1, noting that ordered factorization is one specific form of decomposition.

Theorem 4 will allow us to confirm recoverability of certain queries Q in models such as those in Fig. 2(a), (b) and (d), which do not satisfy the requirement in Theorem 2. For example, by applying Theorem 4 we can conclude that (1) in Figure 2(a), P(X | Y) = P(X | Rx = 0, Ry = 0, Y) is recoverable, (2) in Figure 2(c), P(X, Y, Z) = P(Z | X, Y, Rz = 0, Rx = 0, Ry = 0) P(X | Y, Rx = 0, Ry = 0) P(Y | Ry = 0) is recoverable, and (3) in Figure 2(d), P(X, Z) = P(X, Z | Rx = 0, Rz = 0) is recoverable.

Note that the condition of Theorem 4 differs from that of Theorem 1 in two ways. Firstly, the decomposition is limited to ordered factorizations, i.e. Yi is a singleton and Xi a set. Secondly, both Yi and Xi are taken from Vo ∪ Vm, thus excluding R variables.
Example 6. Consider the query Q = P(X4) in Fig. 2(b). Q can be decomposed in a variety of ways, among them being the factorizations:

(a) P(X4) = Σ_{x3} P(X4 | x3) P(x3) for the order X4, X3;
(b) P(X4) = Σ_{x2} P(X4 | x2) P(x2) for the order X4, X2;
(c) P(X4) = Σ_{x1} P(X4 | x1) P(x1) for the order X4, X1.

Although each of X1, X2 and X3 d-separate X4 from R_{X4}, only (c) is admissible since each factor satisfies Theorem 4. Specifically, (c) can be written as Σ_{x1} P(X4* | x1, R_{X4} = 0) P(x1) and can be estimated by the deletion schedule (X1, X4), i.e., in each stratum of X1, we delete samples for which R_{X4} = 1 and compute P(X4*, R_{X4} = 0, X1). In (a) and (b), however, Theorem 4 is not satisfied since the graph does not permit us to rewrite P(X3) as P(X3 | R_{X3} = 0) or P(X2) as P(X2 | R_{X2} = 0).
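The admissible factorization (c) and its deletion schedule (X1, X4) can be coded directly. In this sketch the columns 'X1' and 'X4' and the indicator column 'R_X4' (0 = observed) are naming assumptions:

    def estimate_p_x4(df):
        # Option (c): P(X4) = sum_{x1} P(X4* | X1 = x1, R_X4 = 0) P(X1 = x1)
        p_x1 = df["X1"].value_counts(normalize=True)   # X1 is fully observed
        obs = df[df["R_X4"] == 0]                      # delete within each stratum
        p_x4_given_x1 = obs.groupby("X1")["X4"].value_counts(normalize=True)
        out = {}
        for (x1, x4), p in p_x4_given_x1.items():
            out[x4] = out.get(x4, 0.0) + p * p_x1[x1]
        return out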
3.2 Heuristics for Finding Admissible Factorization
Consider the task of estimating Q = P(X), where X is a set, by searching for an admissible factorization of P(X) (one that satisfies Theorem 4), possibly by resorting to additional variables, Z, residing outside of X that serve as separating sets. Since there is an exponentially large number of ordered factorizations, it would be helpful to rule out classes of non-admissible orderings prior to their enumeration whenever non-admissibility can be detected in the graph. In this section, we provide lemmata that aid in the pruning process by harnessing information from the graph.

Lemma 1. An ordered set O will not yield an admissible decomposition if there exists a partially observed variable Vi in the order O which is not marginally independent of R_Vi, such that all minimal separators (refer Appendix 5.1, Definition 4) of Vi that d-separate it from R_Vi appear before Vi.

Proof: Refer Appendix 5.7.
Figure 3: Demonstrates (a) pruning in Example 7, (b) P(X, Y) is recoverable in Example 5.
Applying Lemma 1 requires a solution to a set of disjunctive constraints which can be represented by directed constraint graphs [5].

Example 7. Let Q = P(X) be the relation to be recovered from the graph in Fig. 3(a). Let X = {A, B, C, D, E} and Z = F. The total number of ordered factorizations is 6! = 720. The independencies implied by minimal separators (as required by Lemma 1) are: A ⊥⊥ R_A | B; B ⊥⊥ R_B | ∅; C ⊥⊥ R_C | {D, E}; (D ⊥⊥ R_D | A or D ⊥⊥ R_D | C or D ⊥⊥ R_D | B); and (E ⊥⊥ R_E | {B, F} or E ⊥⊥ R_E | {B, D} or E ⊥⊥ R_E | C). To test whether (B, A, D, E, C, F) is potentially admissible we need not explicate all 6 variables; this order can be ruled out as soon as we note that A appears after B. Since B is the only minimal separator that d-separates A from R_A and B precedes A, Lemma 1 is violated. Orders such as (C, D, E, A, B, F), (C, D, A, E, B, F) and (C, E, D, A, F, B) satisfy the condition stated in Lemma 1 and are potential candidates for admissibility.
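The pruning implied by Lemma 1 can be mechanized. The sketch below encodes the minimal separators listed above and rejects an order whenever some constrained variable has no separator left wholly among the later variables; this operational reading of "all separators appear before Vi" is an interpretation made for the illustration:

    from itertools import permutations

    # Minimal separators of each Vi from R_Vi, as listed in Example 7.
    # B is marginally independent of R_B and so imposes no constraint; F is
    # an auxiliary variable with no constraint of its own.
    SEPS = {
        "A": [{"B"}],
        "C": [{"D", "E"}],
        "D": [{"A"}, {"C"}, {"B"}],
        "E": [{"B", "F"}, {"B", "D"}, {"C"}],
    }

    def violates_lemma1(order):
        for i, v in enumerate(order):
            if v in SEPS:
                later = set(order[i + 1:])
                # Vi needs at least one minimal separator wholly among the
                # later variables, so it can serve as the conditioning set Xi.
                if not any(s <= later for s in SEPS[v]):
                    return True
        return False

    assert violates_lemma1(tuple("BADECF"))        # ruled out: B precedes A
    for o in [tuple("CDEABF"), tuple("CDAEBF"), tuple("CEDAFB")]:
        assert not violates_lemma1(o)              # the candidates named above
    survivors = [o for o in permutations("ABCDEF") if not violates_lemma1(o)]
    print(len(survivors), "of 720 orders survive the pruning")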
The following lemma presents a simple test to determine non-admissibility by specifying the condition under which a given order can be summarily removed from the set of candidate orders that are likely to yield admissible factorizations.

Lemma 2. An ordered set O will not yield an admissible decomposition if it contains a partially observed variable Vi for which there exists no set S ⊆ V that d-separates Vi from R_Vi.

Proof: The factor P(Vi | V_{i+1}, . . . , V_n) corresponding to Vi can never satisfy the condition required by Theorem 4.
An interesting consequence of Lemma 2 is the following corollary that gives a sufficient condition under which no ordered factorization can be labeled admissible.

Corollary 2. For any disjoint sets X and Y, there exists no admissible factorization for recovering the relation P(Y | X) by Theorem 4 if Y contains a partially observed variable Vi for which there exists no set S ⊆ V that d-separates Vi from R_Vi.
4 Conclusions

We have demonstrated that causal graphical models depicting the data generating process can serve as a powerful tool for analyzing missing data problems and determining (1) if theoretical impediments exist to eliminating bias due to data missingness, (2) whether a given procedure produces consistent estimates, and (3) whether such a procedure can be found algorithmically. We formalized the notion of recoverability and showed that relations are always recoverable when data are missing at random (MCAR or MAR) and, more importantly, that in many commonly occurring problems, recoverability can be achieved even when data are missing not at random (MNAR). We further presented a sufficient condition to ensure recoverability of a given relation Q (Theorem 1) and operationalized Theorem 1 using graphical criteria (Theorems 2, 3 and 4). In summary, we demonstrated some of the insights and capabilities that can be gained by exploiting causal knowledge in missing data problems.
Acknowledgment
This research was supported in parts by grants from NSF #IIS-1249822 and #IIS-1302448 and ONR
#N00014-13-1-0153 and #N00014-10-1-0933
References

[1] P.D. Allison. Missing data series: Quantitative applications in the social sciences, 2002.
[2] T. Bu, N. Duffield, F.L. Presti, and D. Towsley. Network tomography on general topologies. In ACM SIGMETRICS Performance Evaluation Review, volume 30, pages 21-30. ACM, 2002.
[3] E.R. Buhi, P. Goodson, and T.B. Neilands. Out of sight, not out of mind: strategies for handling missing data. American Journal of Health Behavior, 32:83-92, 2008.
[4] R.M. Daniel, M.G. Kenward, S.N. Cousens, and B.L. De Stavola. Using causal diagrams to guide analysis in missing data problems. Statistical Methods in Medical Research, 21(3):243-256, 2012.
[5] R. Dechter, I. Meiri, and J. Pearl. Temporal constraint networks. Artificial Intelligence, 1991.
[6] C.K. Enders. Applied Missing Data Analysis. Guilford Press, 2010.
[7] U.M. Fayyad. Data mining and knowledge discovery: Making sense out of data. IEEE Expert, 11(5):20-25, 1996.
[8] F.M. Garcia. Definition and diagnosis of problematic attrition in randomized controlled experiments. Working paper, April 2013. Available at SSRN: http://ssrn.com/abstract=2267120.
[9] R.D. Gill and J.M. Robins. Sequential models for coarsening and missingness. In Proceedings of the First Seattle Symposium in Biostatistics, pages 295-305. Springer, 1997.
[10] R.D. Gill, M.J. Van Der Laan, and J.M. Robins. Coarsening at random: Characterizations, conjectures, counter-examples. In Proceedings of the First Seattle Symposium in Biostatistics, pages 255-294. Springer, 1997.
[11] J.W. Graham. Missing Data: Analysis and Design (Statistics for Social and Behavioral Sciences). Springer, 2012.
[12] D.F. Heitjan and D.B. Rubin. Ignorability and coarse data. The Annals of Statistics, pages 2244-2253, 1991.
[13] R.J.A. Little and D.B. Rubin. Statistical Analysis with Missing Data. Wiley, 2002.
[14] B.M. Marlin and R.S. Zemel. Collaborative prediction and ranking with non-random missing data. In Proceedings of the Third ACM Conference on Recommender Systems, pages 5-12. ACM, 2009.
[15] B.M. Marlin, R.S. Zemel, S. Roweis, and M. Slaney. Collaborative filtering and the missing at random assumption. In UAI, 2007.
[16] B.M. Marlin, R.S. Zemel, S.T. Roweis, and M. Slaney. Recommender systems: missing data and statistical model estimation. In IJCAI, 2011.
[17] P.E. McKnight, K.M. McKnight, S. Sidani, and A.J. Figueredo. Missing Data: A Gentle Introduction. Guilford Press, 2007.
[18] Harvey J. Miller and Jiawei Han. Geographic Data Mining and Knowledge Discovery. CRC, 2009.
[19] K. Mohan and J. Pearl. On the testability of models with missing data. To appear in the Proceedings of AISTATS 2014; available at http://ftp.cs.ucla.edu/pub/stat_ser/r415.pdf.
[20] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[21] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge Univ. Press, New York, 2009.
[22] J. Pearl and K. Mohan. Recoverability and testability of missing data: Introduction and summary of results. Technical Report R-417, UCLA, 2013. Available at http://ftp.cs.ucla.edu/pub/stat_ser/r417.pdf.
[23] C.Y.J. Peng, M. Harwell, S.M. Liou, and L.H. Ehman. Advances in missing data methods and implications for educational research. Real Data Analysis, pages 31-78, 2006.
[24] J.L. Peugh and C.K. Enders. Missing data in educational research: A review of reporting practices and suggestions for improvement. Review of Educational Research, 74(4):525-556, 2004.
[25] D.B. Rubin. Inference and missing data. Biometrika, 63:581-592, 1976.
[26] D.B. Rubin. Multiple Imputation for Nonresponse in Surveys. Wiley Online Library, New York, NY, 1987.
[27] D.B. Rubin. Multiple imputation after 18+ years. Journal of the American Statistical Association, 91(434):473-489, 1996.
[28] J.L. Schafer and J.W. Graham. Missing data: our view of the state of the art. Psychological Methods, 7(2):147-177, 2002.
[29] F. Thoemmes and N. Rose. Selection of auxiliary variables in missing data problems: Not all auxiliary variables are created equal. Technical Report R-002, Cornell University, 2013.
[30] M.J. Van der Laan and J.M. Robins. Unified Methods for Censored Longitudinal Data and Causality. Springer Verlag, 2003.
[31] W. Wothke. Longitudinal and multigroup modeling with missing data. Lawrence Erlbaum Associates Publishers, 2000.
4,308 | 49 |
CONNECTING TO THE PAST
Bruce A. MacDonald, Assistant Professor
Knowledge Sciences Laboratory, Computer Science Department
The University of Calgary, 2500 University Drive NW
Calgary, Alberta T2N 1N4
ABSTRACT
Recently there has been renewed interest in neural-like processing systems, evidenced for example in the two volumes Parallel Distributed Processing edited by Rumelhart and McClelland, and discussed as parallel distributed systems, connectionist models, neural nets, value passing systems and multiple context systems. Dissatisfaction with symbolic manipulation paradigms for artificial intelligence seems partly responsible for this attention, encouraged by the promise of massively parallel systems implemented in hardware. This paper relates simple neural-like systems based on multiple context to some other well-known formalisms, namely production systems, k-length sequence prediction, finite-state machines and Turing machines, and presents earlier sequence prediction results in a new light.
1 INTRODUCTION
The revival of neural net research has been very strong, exemplified recently by Rumelhart
and McClelland 1, new journals and a number of meetings a. The nets are also described as
parallel distributed systems 1, connectionist models 2, value passing systems 3 and multiple context
learning systems 4,5,6,7,8,9. The symbolic manipulation paradigm for artificial intelligence does
not seem to have been as successful as some hoped 1, and there seems at last to be real promise
of massively parallel systems implemented in hardware. However, in the flurry of new work it
is important to consolidate new ideas and place them solidly alongside established ones. This
paper relates simple neural-like systems to some other well-known notions-namely production
systems, k-length sequence prediction, finite-state machines and Turing machines-and presents
earlier results on the abilities of such networks in a new light.
The general form of a connectionist system 10 is simplified to a three layer net with binary
fixed weights in the hidden layer, thereby avoiding many of the difficulties - and challenges - of the recent work on neural nets. The hidden unit weights are regularly patterned using a
template. Sophisticated, expensive learning algorithms are avoided, and a simple method is
used for determining output unit weights. In this way we gain some of the advantages of multilayered nets, while retaining some of the simplicity of two layer net training methods. Certainly
nothing is lost in computational power-as I will explain-and the limitations of two layer
nets are not carried over to the simplified three layer one. Biological systems may similarly
avoid the need for learning algorithms such as the "simulated annealing" method commonly
used in connectionist models l l . For one thing, biological systems do not have the same clearly
distinguished training phase.
Briefly, the simplified net b is a production system implemented as three layers of neuron-like
units; an output layer, an input layer, and a hidden layer for the productions themselves. Each
hidden production unit potentially connects a predetermined set of inputs to any output. A
k-length sequence predictor is formed once k levels of delay unit are introduced into the input
layer. k-length predictors are unable to distinguish simple sequences such as ba...a and aa...a
since after k or more characters the system has forgotten whether an a or b appeared first. If
the k-length predictor is augmented with "auxiliary" actions, it is able to learn this and other
regular languages, since the auxiliary actions can be equivalent to states, and can be inputs to
a Among them the 1st International Conference on Neural Nets, San Diego, CA, June 21-24, 1987, and this conference.
b Roughly equivalent to a single context system in Andreae's multiple context system 4,5,6,7,8,9. See also MacDonald 12.
© American Institute of Physics 1988
Figure 1: The general form of a connectionist system 10 .
(a) Form of a unit. (b) Operations within a unit: the weighted inputs are summed to give the net excitation, a typical F maps the excitation to an activation, and a typical f maps the activation to the output.
the production units enabling predictions to depend on previous states 7 . By combining several
augmented sequence predictors a Thring machine tape can be simulated along with a finite-state
controller 9 , giving the net the computational power of a Universal Turing machine. Relatively
simple neural-like systems do not lack computational ability. Previous implementations 7,9 of
this ability are production system equivalents to the simplified nets.
1.1
Organization of the paper
The next section briefly reviews the general form of connectionist systems. Section 2 simplifies
this, then section 3 explains that the result is equivalent to a production system dealing only
with inputs and outputs of the net. Section 4 extends the simplified version, enabling it to learn
to predict sequences. Section 5 explains how the computational power of the sequence predictor
can be increased to that of a Turing machine if some input units receive auxiliary actions; in fact
the system can learn to be a Turing machine. Section 6 discusses the possibility of a number of
nets combining their outputs, forming an overall net with "association areas".
1.2
General form of a connectionist system
Figure 1 shows the general form of a connectionist system unit, neuron or cell 10. In the figure unit i has inputs, which are the outputs o_j of possibly all units in the network, and an output of its own, o_i. The net input excitation, net_i, is the weighted sum of inputs, where w_ij is the weight connecting the output from unit j as an input to unit i. The activation, a_i, of the unit is some
function Fi of the net input excitation. Typically Fi is semilinear, that is non-decreasing and
differentiable 13 , and is the same function for all, or at least large groups of units. The output is
a function fi of the activation; typically some kind of threshold function. I will assume that the
quantities vary over discrete time steps, so for example the activation at time t + 1 is a_i(t + 1) and is given by F_i(net_i(t)).
In general there is no restriction on the connections that may be made between units.
Units not connected directly to inputs or outputs are hidden units. In more complex nets
than those described in this paper, there may be more than one type of connection. Figure 2
shows a common connection topology, where there are three layers of units-input, hidden and
output-with no cycles of connection.
The net is trained by presenting it with input combinations, each along with the desired
output combination. Once trained the system should produce the desired outputs given just
Figure 2: The basic structure of a three layer connectionist system.
(diagram: a layer of input units feeds a layer of hidden units, which feeds a layer of output units)
inputs . During training the weights are adjusted in some fashion that reduces the discrepancy
between desired and actual output. The general method is 10:
Δw_ij = g(a_i(t), t_i(t)) h(o_j(t), w_ij),   (1)
where t_i is the desired, "training" activation. Equation 1 is a general form of Hebb's classic
rule for adjusting the weight between two units with high activations 10. The weight adjustment
is the product of two functions, one that depends on the desired and actual activations - often
just the difference - and another that depends on the input to that weight and the weight itself.
As a simple example suppose g is the difference and h is just the output o_j. Then the weight change is the product of the output error and the input excitation to that weight:
Δw_ij = η (t_i − a_i) o_j,
where the constant η determines the learning rate. This is the Widrow-Hoff or Delta rule which may be used in nets without hidden units 10.
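As a concrete rendering of the delta rule just described, here is a minimal Python sketch; the function and variable names are my own, not the paper's.

import numpy as np

def delta_rule_step(w, x, target, eta=0.1):
    """One Widrow-Hoff update: reduce the squared output error on input x."""
    output = w @ x                         # linear unit: the net input excitation
    return w + eta * (target - output) * x

# learn a 3-input mapping from two training examples
w = np.zeros(3)
for x, t in [(np.array([1., 0., 1.]), 1.0), (np.array([0., 1., 0.]), 0.0)]:
    w = delta_rule_step(w, x, t)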
The important contribution of recent work on connectionist systems is how to implement
equation 1 in hidden units, for which there are no training signals t_i directly available. The
Boltzmann learning method iteratively varies both weights and hidden unit training activations
using the controlled, gradually decreasing randomizing method "simulated annealing" 14. Backpropagation 13 is also iterative, performing gradient descent by propagating training signal errors
back through the net to hidden units. I will avoid the need to determine training signals for
hidden units, by fixing the weights of hidden units in section 2 below.
2 SIMPLIFIED SYSTEM
Assume these simplifications are made to the general connectionist system of section 1.2:
1. The system has three layers, with the topology shown in Figure 2 (i.e. no cycles)
2. All hidden layer unit weights are fixed, say at unity or zero
3. Each unit is a linear threshold unit 10, which means the activation function for all units is the identity function, giving just net_i, a weighted sum of the inputs, and the output
function is a simple binary threshold of the form:
(a step function: the output switches from 0 to 1 when the activation crosses the threshold)
so that the output is binary; on or off. Hidden units will have thresholds requiring all
inputs to be active for the output to be active (like an AND gate) while output units will
have thresholds requiring only one or two active highly weighted inputs for an output to be
generated (like an OR gate). This is in keeping with the production system view of the
net, explained in section 3.
4. Learning-which now occurs only at the output unit weights-gives weight adjustments
according to:
w_ij = 1 if a_i = o_j = 1, and w_ij = 0 otherwise,
so that weights are turned on if their input and the unit output are on, and off otherwise.
That is, w_ij = a_i ∧ o_j. A simple example is given in Figure 3 in section 3 below.
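The four simplifications can be sketched directly in code. The following Python fragment is my own illustrative rendering, not an implementation from the paper: hidden units are fixed AND gates over a 0/1 template, output units are OR gates, and output weights follow the binary Hebbian rule w_ij = a_i ∧ o_j.

import numpy as np

class SimplifiedNet:
    """Three-layer net: fixed AND-gate hidden units, learned OR-gate outputs."""
    def __init__(self, template, n_outputs):
        self.template = np.asarray(template, dtype=float)       # fixed 0/1 hidden weights (rule 2)
        self.W = np.zeros((n_outputs, self.template.shape[0]))  # learned output weights

    def hidden(self, x):
        # AND gate: a hidden unit fires only if all of its template inputs are on (rule 3)
        need = self.template.sum(axis=1)
        return (self.template @ x >= need).astype(float)

    def forward(self, x):
        # OR gate: an output fires if any connected hidden unit is active (rule 3)
        return (self.W @ self.hidden(x) >= 1).astype(float)

    def train(self, x, y):
        # rule 4: turn w_ij on when hidden unit j and output i are both on
        self.W = np.maximum(self.W, np.outer(y, self.hidden(x)))

# the rain production of Figure 3: inputs (cloudy, pressure falling) -> output (rain)
net = SimplifiedNet(template=[[1, 1]], n_outputs=1)
net.train(np.array([1., 1.]), np.array([1.]))
print(net.forward(np.array([1., 1.])))   # -> [1.]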
This simple form of net can be made probabilistic by replacing 4 with 4' below:
4'. Adjust weights so that Wij estimates the conditional probability of the unit i output being
on when output j is on. That is,
w_ij = estimate of P(o_i | o_j).
Then, assuming independence of the inputs to a unit, an output unit is turned on when the
conditional probability of occurrence of that output exceeds the threshold of the output
function.
Once these simplifications are made, there is no need for learning in the hidden units. Also no
iterative learning is required; weights are either assigned binary values, or estimate conditional
probabilities. This paper presents some of the characteristics of the simplified net. Section 6
discusses the motivation for simplifying neural nets in this way.
3 PRODUCTION SYSTEMS
The simplified net is a kind of simple production system. A production system comprises a
global database, a set of production rules and a control system 15 . The database for the net is
the system it interacts with, providing inputs as reactions to outputs from the net. The hidden
units of the network are the production rules, which have the form
IF
precondition
THEN
action
The precondition is satisfied when the input excitation exceeds the threshold of a hidden unit.
The actions are represented by the output units which the hidden production units activate.
The control system of a production system chooses the rule whose action to perform, from the
set of rules whose preconditions have been met. In a neural net the control system is distributed
throughout the net in the output units. For example, the output units might form a winner-take-all net. In production systems more complex control involves forward and backward chaining to
choose actions that seek goals. This is discussed elsewhere 4,12,16. Figure 3 illustrates a simple
production implemented as a neural net. As the figure shows, the inputs to hidden units are
just the elements of the precondition. When the appropriate input combination is present the
associated hidden (production) unit is fired. Once weights have been learned connecting hidden
units to output units, firing a production results in output. The simplified neural net is directly
equivalent to a production system whose elements are inputs and outputs c.
Some production systems have symbolic elements, such as variables, which can be given
values by production actions. The neural net cannot directly implement this, since it can
have outputs only from a predetermined set. However, we will see later that extensions to the
framework enable this and other abilities.
c This might be referred to as a "sensory-motor" production system, since when implemented in a real system
such as a robot, it deals only with sensed inputs and executable motor actions, which may include the auxiliary
actions of section 4.3.
Figure 3: A production implemented in a simplified neural net.
(a) A production rule: IF cloudy AND pressure falling THEN it will rain.
(b) The rule implemented as a hidden unit. The threshold of the hidden unit is 2, so it is an AND gate. The threshold of the output unit is 1, so it is an OR gate. The learned weight will be 0 or 1 if the net is not probabilistic; otherwise it will be an estimate of P(it will rain | cloudy AND pressure falling).
Figure 4: A net that predicts the next character in a sequence, based on only the last character.
(a) The net. Production units (hidden units) have been combined with input units. For example this net could predict the sequence abcabcabc.... Productions have the form: IF last character is ... THEN next character will be .... The learning rule is w_ij = 1 if (input_j AND output_i). Output is a_i = Σ_j w_ij o_j. (diagram: inputs a, b, c feed the net, which produces outputs a, b, c)
(b) Learning procedure.
1. Clamp inputs and outputs to desired values
2. System calculates weight values
3. Repeat 1 and 2 for all required input/output combinations
4 SEQUENCE PREDICTION
A production system or neural net can predict sequences. Given examples of a repeating sequence, productions are learned which predict future events on the basis of recent ones . Figure 4
shows a trivially simple sequence predictor. It predicts the next character of a sequence based
on the previous one. The figure also gives the details of the learning procedure for the simplified
net. The net need be trained only once on each input combination, then it will "predict" as
an output every character seen after the current one. The probabilistic form of the net would
estimate conditional probabilities for the next character , conditional on the current one. Many
Figure 5: Using delayed inputs, a neural net can implement a k-length sequence predictor.
(a) A net with the last three characters as input.
(diagram: the last, 2nd last and 3rd last characters a...z each feed the hidden production units, which connect to the output characters a...z)
(b) An example production: IF the last three characters were ... THEN output the next character.
presentations of each possible character pair would be needed to properly estimate the probabilities. The net would be learning the probability distribution of character pairs. A predictor like
the one in Figure 4 can be extended to a general k-length predictor 17 so long as inputs delayed
by 1,2, ... , k steps are available. Then, as illustrated in Figure 5 for 3-length prediction, hidden
production units represent all possible combinations of k symbols. Again output weights are
trained to respond to previously seen input combinations, here of three characters. These delays
can be provided by dedicated neural nets d , such as that shown in Figure 6. Note that the net
is assumed to be synchronously updated, so that the input from feedback around units is not
changed until one step after the output changes. There are various ways of implementing delay
in neurons, and Andreae 4 investigates some of them for the same purpose-delaying inputs-in
a more detailed simulation of a similar net.
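A probabilistic k-length predictor of this kind can be sketched with a table of conditional counts; this is my own minimal construction, not code from the paper or from Andreae's simulation.

from collections import defaultdict, deque, Counter

class KLengthPredictor:
    """Predict the next symbol from the last k symbols (the probabilistic rule 4')."""
    def __init__(self, k):
        self.k = k
        self.context = deque(maxlen=k)        # the k delayed inputs
        self.counts = defaultdict(Counter)    # context -> counts of the next symbol

    def observe(self, symbol):
        if len(self.context) == self.k:
            self.counts[tuple(self.context)][symbol] += 1
        self.context.append(symbol)

    def predict(self):
        c = self.counts.get(tuple(self.context))
        return max(c, key=c.get) if c else None

p = KLengthPredictor(k=3)
for ch in "abcabcabcabc":
    p.observe(ch)
print(p.predict())   # -> 'a': after seeing ...abc the predictor expects 'a'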
4.1 Other work on sequence prediction in neural nets
Feldman and Ballard 2 find connectionist systems initially not suited to representing changes
with time. One form of change is sequence, and they suggest two methods for representing
sequence in nets. The first is by units connected to each other in sequence so that sequential
tasks are represented by firing these units in succession. The second method is to buffer the
inputs in time so that inputs from the recent past are available as well as current inputs; that
is, delayed inputs are available as suggested above. An important difference is the necessary
length of the buffer; Feldman and Ballard suggest the buffer be long enough to hold a phrase of
natural language, but I expect to use buffers no longer than about 7, after Andreae 4 . Symbolic
inputs can represent more complex information effectively giving the length seven buffers more
information than the most recent seven simple inputs, as discussed in section 5.
The method of back-propagation13 enables recurrent networks to learn sequential tasks in a
d Feldman and Ballard 2 give some dedicated neural net connections for a variety of functions
Figure 6: Inputs can be delayed by dedicated neural subnets. A two stage delay is shown.
(a) Delay network.
(b) Timing diagram for (a).
(timing waveforms showing the original signal, a delay of one step, and a delay of two steps)
manner similar to the first suggestion in the last paragraph, where sequences of connected units
represent sequenced events. In one example a net learns to complete a sequence of characters;
when given the first two characters of a six character sequence the next four are output. Errors
must be propagated around cycles in a recurrent net a number of times.
Seriality may also be achieved by a sequence of states of distributed activation 18. An example
is a net playing both sides of a tic-tac-toe game 18 . The sequential nature of the net's behavior is
derived from the sequential nature of the responses to the net's actions; tic-tac-toe moves. A net
can model sequence internally by modeling a sequential part of its environment. For example,
a tic-tac-toe playing net can have a model of its opponent.
k-length sequence predictors are unable to learn sequences which do not repeat more frequently than every k characters. Their k-length context includes only information about the last
k events. However, there are two ways in which information from before the kth last input can
be retained in the net. The first method latches some inputs, while the second involves auxiliary
actions.
4.2 Latch units
Inputs can be latched and held indefinitely using the combination shown in Figure 7. Not all
inputs would normally be latched. Andreae 4 discusses this technique of "threading" latched
events among non-latched events, giving the net both information arbitrarily far back in its
input-output history and information from the immediate past. Briefly, the sequence ba...a can be distinguished from aa...a if the first character is latched. However, this is an ad hoc solution to this problem e.
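A discrete-time sketch of the latch of Figure 7, written as I read the description (mutual inhibition plus self-feedback, with a fresh input able to override a held one); the factor of 2 weighting inputs over feedback is my own choice.

def latch_step(state, inputs):
    """One synchronous update of a two-unit latch (winner-take-all with feedback)."""
    new = []
    for i in range(2):
        drive = 2 * inputs[i] + state[i]          # external input outweighs feedback
        rival = 2 * inputs[1 - i] + state[1 - i]  # mutual inhibition from the other unit
        new.append(1 if drive > rival else 0)
    return new

s = [0, 0]
s = latch_step(s, [1, 0])   # 'a' arrives -> [1, 0]
s = latch_step(s, [0, 0])   # input gone, 'a' is held -> [1, 0]
s = latch_step(s, [0, 1])   # 'b' arrives and takes over the latch -> [0, 1]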
4.3 Auxiliary actions
When an output is fed back into the net as an input signal, this enables the system to choose the
next output at least partly based on the previous one, as indicated in Figure 8. If a particular
fed back output is also one without external manifestation, or whose external manifestation
is independent of the task being performed, then that output is an auxiliary action. It has
e The interested reader should refer to Andreae 4 where more extensive analysis is given.
Figure 7: Threading. A latch circuit remembers an event until another comes along. This is a
two input latch, e.g. for two letters a and b, but any number of units may be similarly connected.
It is formed from a mutual inhibition layer, or winner-take-all connection, along with positive
feedback to keep the selected output activated when the input disappears.
(diagram: inputs a and b into the latch)
Figure 8: Auxiliary actions-the S outputs-are fed back to the inputs of a net, enabling the
net to remember a state. Here both part of a net and an example of a production are shown.
There are two types of action, characters and S actions.
(diagram: S inputs and character inputs feed the net, which produces character outputs and S outputs; the S outputs are fed back. The example production has the form: IF S input is ... and character input is ... THEN output character ... and S ....)
no direct effect on the task the system is performing since it evokes no relevant inputs, and
so can be used by the net as a symbolic action. If an auxiliary action is latched at the input
then the symbolic information can be remembered indefinitely, being lost only when another
auxiliary action of that kind is input and takes over the latch. Thus auxiliary actions can act
like remembered states; the system performs an action to "remind" itself to be in a particular
state. The figure illustrates this for a system that predicts characters and state changes given
the previous character and state. An obvious candidate for auxiliary actions is speech. So
the blank oval in the figure would represent the net's environment, through which its own
speech actions are heard. Although it is externally manifested, speech has no direct effect on
our physical interactions with the world. Its symbolic ability not only provides the power of
auxiliary actions, but also includes other speakers in the interaction.
5 SIMULATING ABSTRACT AUTOMATA
The example in Figure 8 gives the essence of simulating a finite state automaton with a production system or its neural net equivalent . It illustrates the transition function of an automaton;
the new state and output are a function of the previous state and input. Thus a neural net can
simulate a finite state automaton, so long as it has additional, auxiliary actions.
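To make the state-remembering idea concrete, here is a hedged sketch of a production table in the style of Figure 8; the particular states and characters are illustrative, not taken from the paper.

# productions: IF S input is s and character input is c
#              THEN output character o and auxiliary action s'
productions = {
    ("S0", "a"): ("b", "S1"),   # illustrative entries only
    ("S1", "a"): ("a", "S0"),
}

def run(characters, state="S0"):
    outputs = []
    for c in characters:
        out, state = productions[(state, c)]   # the automaton's transition function
        outputs.append(out)                    # state persists via the fed-back S action
    return outputs

print(run("aaaa"))   # -> ['b', 'a', 'b', 'a'], which no plain 1-length predictor can produce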
A Turing machine is a finite state automaton controller plus an unbounded memory. A neural net could simulate a Turing machine in two ways, and both ways have been demonstrated with production system implementations - equivalent to neural nets - called "multiple context learning systems" f, briefly explained in section 6. The first Turing machine simulation 7 has the
system simulate only the finite state controller, but is able to use an unbounded external memory
f See John Andreae's and his colleagues' work 4,5,6,7,8,9,12,16
Figure 9: Multiple context learning system implementation as multiple neural nets. Each 3-
layer net has the simplified form presented above, with a number of elaborations such as extra
connections for goal-seeking by forward and backward chaining .
(diagram: several nets share input and output channels)
from the real world, much like the paper tape of Turing's original work 19. The second simulation 5,12
embeds the memory in the multiple context learning system, along with a counter for accessing
this simulated memory. Both learn all the productions-equivalent to learning output unit
weights-required for the simulations. The second is able to add internal memory as required,
up to a limit dependent on the size of the network (which can easily be large enough to allow 70
years of computation!). The second could also employ external memory as the first did. Briefly,
the second simulation comprised multiple sequence predictors which predicted auxiliary actions
for remembering the state of the controller, and the current memory position . The memory
element is updated by relearning the production representing that element; the precondition is
the address and the production action the stored item.
6 MULTIPLE SYSTEMS FORM ASSOCIATION AREAS
A multiple context learning system is a production system version of a multiple neural net, although a simple version has been implemented as a simulated net 4,20. It effectively comprises
several nets - or "association" areas - which may have outputs and inputs in common, as indicated in Figure 9. Hidden unit weights are specified by templates; one for each net. A template
gives the inputs to have a zero weight for the hidden units of a net and the inputs to have a
weight of unity. Delayed and latched inputs are also available . The actual outputs are selected
from the combined predictions of the nets in a winner-take-all fashion .
I see the design for real neural nets, say as controllers for real robots, requiring a large
degree of predetermined connectivity. A robot controller could not be one three layer net with
every input connected to every hidden unit in turn connected to every output. There will
need to be some connectivity constraints so the net reflects the functional specialization in the
control requirements 9 . The multiple context learning system has all the hidden layer connections
predetermined, but allows output connections to be learned. This avoids the "credit assignment"
problem and therefore also the need for learning algorithms such as Boltzmann learning and
back-propagation. However, as the multiple context learning system has auxiliary actions, and
delayed and latched inputs, it does not lack computational power. Future work in this area
should investigate, for example, the ability of different kinds of nets to learn auxiliary actions. This may be difficult as symbolic actions may not be provided in training inputs and outputs.
g For example a controller for a robot body would have to deal with vision, manipulation, motion, etc.
7 CONCLUSION
This paper has presented a simplified three layer connectionist model, with fixed weights for
hidden units, delays and latches for inputs, sequence prediction ability, auxiliary "state" actions,
and the ability to use internal and external memory. The result is able to learn to simulate a
Turing machine. Simple neural-like systems do not lack computational power.
ACKNOWLEDGEMENTS
This work is supported by the Natural Sciences and Engineering Council of Canada.
REFERENCES
1. Rumelhart,D.E. and McClelland,J.L. Parallel distributed processing. Volumes 1 and 2. MIT Press. (1986)
2. Feldman,J.A. and Ballard,D.H. Connectionist models and their properties. Cognitive Science 6, pp.205-254. (1982)
3. Fahlman,S.E. Three Flavors of Parallelism. Proc. 4th Nat. Conf. CSCSI/SCSEIO, Saskatoon. (1982)
4. Andreae,J.H. Thinking with the teachable machine. Academic Press. (1977)
5. Andreae,J.H. Man-Machine Studies Progress Reports UC-DSE/1-28. Dept Electrical and Electronic Engineering, Univ. Canterbury, Christchurch, New Zealand. editor. (1972-87) (Also available from NTIS, 5285 Port Royal Rd, Springfield, VA 22161)
6. Andreae,J.H. and Andreae,P.M. Machine learning with a multiple context. Proc. 9th Int. Conf. on Cybernetics and Society. Denver. October. pp.734-9. (1979)
7. Andreae,J.H. and Cleary,J.G. A new mechanism for a brain. Int. J. Man-Machine Studies 8(1): pp.89-119. (1976)
8. Andreae,P.M. and Andreae,J.H. A teachable machine in the real world. Int. J. Man-Machine Studies 10: pp.301-12. (1978)
9. MacDonald,B.A. and Andreae,J.H. The competence of a multiple context learning system. Int. J. Gen. Systems 7: pp.123-37. (1981)
10. Rumelhart,D.E., Hinton,G.E. and McClelland,J.L. A general framework for parallel distributed processing. Chapter 2 in Rumelhart and McClelland 1, pp.45-76. (1986)
11. Hinton,G.E. and Sejnowski,T.J. Learning and relearning in Boltzmann machines. Chapter 7 in Rumelhart and McClelland 1, pp.282-317. (1986)
12. MacDonald,B.A. Designing teachable robots. PhD thesis, University of Canterbury, Christchurch, New Zealand. (1984)
13. Rumelhart,D.E., Hinton,G.E. and Williams,R.J. Learning Internal Representations by Error Propagation. Chapter 8 in Rumelhart and McClelland 1, pp.318-362. (1986)
14. Ackley,D.H., Hinton,G.E. and Sejnowski,T.J. A Learning Algorithm for Boltzmann Machines. Cognitive Science 9, pp.147-169. (1985)
15. Nilsson,N.J. Principles of Artificial Intelligence. Tioga. (1980)
16. Andreae,J.H. and MacDonald,B.A. Expert control for a robot body. Research Report 87/286/34, Dept. of Computer Science, University of Calgary, Alberta, Canada, T2N-1N4. (1987)
17. Witten,I.H. Approximate, non-deterministic modelling of behaviour sequences. Int. J. General Systems, vol. 5, pp.1-12. (1979)
18. Rumelhart,D.E., Smolensky,P., McClelland,J.L. and Hinton,G.E. Schemata and Sequential Thought Processes in PDP Models. Chapter 14, vol 2 in Rumelhart and McClelland 1, pp.7-57. (1986)
19. Turing,A.M. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc. vol 42(3), pp.230-65. (1936)
20. Dowd,R.B. A digital simulation of mew-brain. Report no. UC-DSE/10, pp.25-46. (1977)
Networks for the Separation of Sources
that are Superimposed and Delayed
John C. Platt
Federico Faggin
Synaptics, Inc.
2860 Zanker Road, Suite 206
San Jose, CA 95134
ABSTRACT
We have created new networks to unmix signals which have been
mixed either with time delays or via filtering. We first show that
a subset of the Herault-Jutten learning rules fulfills a principle of
minimum output power. We then apply this principle to extensions
of the Herault-Jutten network which have delays in the feedback
path. Our networks perform well on real speech and music signals
that have been mixed using time delays or filtering.
1 INTRODUCTION
Recently, there has been much interest in neural architectures to solve the "blind
separation of signals" problem (Herault & Jutten, 1986) (Vittoz & Arreguit, 1989).
The separation is called "blind," because nothing is assumed known about the
frequency or phase of the signals.
A concrete example of blind separation of sources is when the pure signals are sounds
generated in a room and the mixed signals are the output of some microphones.
The mixture process would model the delay of the sound to each microphone, and
the mixing of the sounds at each microphone. The inputs to the neural network
would be the microphone outputs, and the neural network would try to produce
the pure signals.
The mixing process can take on different mathematical forms in different situations.
To express these forms, we denote the pure signal i as Pi, the mixed signal i as Ii
(which is the ith input to the network), and the output signal i as Oi.
The simplest form to unmix is linear superposition:
I_i(t) = P_i(t) + Σ_{j≠i} M_ij(t) P_j(t).   (1)
A more realistic, but more difficult form to unmix is superposition with single delays:
I_i(t) = P_i(t) + Σ_{j≠i} M_ij(t) P_j(t − D_ij(t)).   (2)
Finally, a rather general mixing process would be superposition with causal filtering:
I_i(t) = P_i(t) + Σ_{j≠i} Σ_k M_ijk(t) P_j(t − δ_k).   (3)
Blind separation is interesting for many different reasons . The network must adapt
on-line and without a supervisor , which is a challenging type of learning. One
could imagine using a blind separation network to clean up an input to a speech
understanding system. (Jutten & Herault, 1991) uses a blind separation network
to deskew images . Finally, researchers have implemented blind separation networks
using analog VLSI to yield systems which are capable of performing the separation
of sources in real time (Vittoz & Arreguit, 1990) (Cohen, et. al., 1992).
1.1 Previous Work
Interest in adaptive systems which perform noise cancellation dates back to the
1960s and 1970s (Widrow, et. al., 1975). The first neural network to unmix on-line
a linear superposition of sources was (Herault & Jutten, 1986). Further work on
off-line blind separation was performed by (Cardoso, 1989). Recently, a network to
unmix filtered signals was proposed in (Jutten, et. al., 1991), independently of this
paper .
2 PRINCIPLE OF MINIMUM OUTPUT POWER
In this section, we apply the mathematics of noise-cancelling networks (Widrow ,
et . al. , 1975) to the network in (Herault & Jutten, 1986) in order to generalize to
new networks that can handle delays in the mixing process .
2.1 Noise-cancellation Networks
A noise-cancellation network tries to purify a signal which is corrupted by filtered
noise (Widrow, et. al. , 1975) . The network has access to the isolated noise signal.
The interference equation is
I(t) = P(t) + Σ_j M_j N(t − δ_j).   (4)
The adaptive filter inverts the interference equation, to yield an output:
O(t) = I(t) − Σ_j C_j N(t − δ_j).   (5)
The adaptation of a noise-cancellation network relies on an elegant notion : if a
signal is impure, it will have a higher power than a pure signal, because the noise
power adds to the signal power. The true pure signal has the lowest power. This
minimum output power principle is used to determine adaptation laws for noise-cancellation networks. Specifically, at any time t, C_j is adjusted by taking a step that minimizes O(t)².
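A minimal numpy sketch of equations (4) and (5) with the minimum-output-power adaptation (gradient descent on O(t)²); the tap count, step size and names are my own choices.

import numpy as np

def noise_cancel(I, N, n_taps=8, eta=0.01):
    """Adapt coefficients C_j so that O(t) = I(t) - sum_j C_j N(t - j)
    has minimum power; returns the cleaned output signal."""
    I = np.asarray(I, dtype=float)
    N = np.asarray(N, dtype=float)
    C = np.zeros(n_taps)
    O = np.zeros_like(I)
    for t in range(n_taps, len(I)):
        past = N[t - n_taps:t][::-1]   # N(t-1), ..., N(t-n_taps)
        O[t] = I[t] - C @ past
        C += eta * O[t] * past         # step in the direction that decreases O(t)^2
    return O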
Figure 1: The network described in (Herault & Jutten, 1986). The dashed arrows
represent adaptation.
2.2 The Herault-Jutten Network
The Herault-Jutten network (see Figure 1) uses a purely additive model of interference. The interference is modeled by
I_i = P_i + Σ_{j≠i} M_ij P_j.   (6)
Notice the Herault-Jutten network solves a more general problem than previous
noise-cancellation networks: the Herault-Jutten network has no access to any pure
signal.
In (Herault & Jutten, 1986), the authors also propose inverting the interference
model:
O_i = I_i − Σ_{j≠i} G_ij O_j.   (7)
The Herault-Jutten network can be understood intuitively by assuming that the
network has already adapted so that the outputs are the pure signals (O_j = P_j). Each connection G_ij subtracts just the right amount of the pure signal P_j from the input I_i to yield the pure signal P_i. So, the Herault-Jutten network will produce pure signals if G_ij = M_ij.
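A sketch of the two-source feedback network (7) with the adaptation rule (8) below, using f(x) = x³ and g(x) = x; this is my own minimal rendering, not the original implementation.

import numpy as np

def herault_jutten(I1, I2, eta=0.01, n_passes=20):
    """Estimate the unmixing coefficients for two superimposed signals."""
    G12 = G21 = 0.0
    for _ in range(n_passes):
        for i1, i2 in zip(I1, I2):
            # solve the feedback pair O1 = i1 - G12*O2, O2 = i2 - G21*O1
            d = 1.0 - G12 * G21
            O1 = (i1 - G12 * i2) / d
            O2 = (i2 - G21 * i1) / d
            G12 += eta * O1**3 * O2    # rule (8) with f(x) = x^3, g(x) = x
            G21 += eta * O2**3 * O1
    return G12, G21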
In (Herault & Jutten, 1986), the authors propose a very general adaptation rule for
the G_ij:
ΔG_ij = η f(O_i) g(O_j),   (8)
for some non-linear functions f and g. (Sorouchyari, 1991) proves that the network converges for f(x) = x³.
In this paper, we propose that the same elegant minimization principle that governs
the noise-cancellation networks can be used to justify a subset of Herault-Jutten
learning algorithms. Let g(x) = x and f(x) be a derivative of some convex function h(x), with a minimum at x = 0. In this case, each output of the Herault-Jutten network independently minimizes a function h(x).
A Herault-Jutten network can be made by setting h(x) = x². Unfortunately, this network will not converge, because the update rules for two connections G_ij and G_ji are identical:
ΔG_ij = η O_i O_j = ΔG_ji.   (9)
Under this condition, the two parameters G_ij and G_ji will track one another and not
converge to the correct answer. Therefore, a non-linear adaptation rule is needed
to break the symmetry between the outputs.
The next two sections of the paper describe how the minimum output power principle can be applied to generalizations of the Herault-Jutten architecture.
3 NETWORK FOR UNMIXING DELAYED SIGNALS
Figure 2:
Our network for unmixing signals mixed with single delays. The
adjustable delay in the feedback path avoids the degeneracy in the learning rule.
The dashed arrows represent adaptation: the source of the arrow is the source of
the error used by gradient descent.
Our new network is an extension of the Herault-Jutten network (see Figure 2). We
assume that the interference is delayed by a certain amount:
I_i(t) = P_i(t) + Σ_{j≠i} M_ij P_j(t − D_ij(t)).   (10)
Compare this to equation (6): our network can handle delayed interference, while
the Herault-Jutten network cannot. We introduce an adjustable delay in the feedback path in order to cancel the delay of the interference:
O_i(t) = I_i(t) − Σ_{j≠i} G_ij O_j(t − d_ij(t)).   (11)
We apply the minimum output power principle to adapt the mixing coefficients G_ij and the delays d_ij:
ΔG_ij(t) = α O_i(t) O_j(t − d_ij(t)),
Δd_ij(t) = −β G_ij(t) O_i(t) (dO_j/dt)(t − d_ij(t)).   (12)
By introducing a delay in the feedback, we prevent degeneracy in the learning rule,
hence we can use a quadratic power to adjust the coefficients .
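The rules in (11) and (12) can be sketched with integer-sample delays and a finite-difference derivative; the clipping, step sizes and names below are my own assumptions, not the paper's implementation.

import numpy as np

def unmix_delayed(I1, I2, d_max=32, eta=1e-3, beta=1e-4):
    """Two-output network of eq. (11) with the updates (12)."""
    G = np.zeros(2)
    d = np.array([1.0, 1.0])               # adjustable feedback delays
    I = np.vstack([I1, I2]).astype(float)
    O = np.zeros_like(I)
    for t in range(d_max, I.shape[1]):
        for i, j in ((0, 1), (1, 0)):
            k = int(round(d[i]))
            O[i, t] = I[i, t] - G[i] * O[j, t - k]
            dOj = O[j, t - k] - O[j, t - k - 1]     # finite-difference dO_j/dt
            G[i] += eta * O[i, t] * O[j, t - k]     # first rule of (12)
            d[i] -= beta * G[i] * O[i, t] * dOj     # second rule of (12)
            d[i] = np.clip(d[i], 1.0, d_max - 1.0)
    return O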
(plot: short-time average signal power versus time, 0 to 7 seconds)
Figure 3: The results of the network applied to a speech/music superposition.
These curves are short-time averages of the power of signals. The upper curve shows
the power of the pure speech signal. The lower curve shows the power of the difference between the speech output of the network, and the pure speech signal. The
gap between the curves is the amount that the network attenuates the interference
between the music and speech: the adaptation of the network tries to drive the
lower curve to zero. As you can see, the network quickly isolates the pure speech
signal.
For a test of our network, we took two signals, one speech and one music, and
mixed them together via software to form two new signals: the first being speech
plus a delayed, attenuated music; the second being music plus delayed, attenuated
speech. Figure 3 shows the results of our network applied to these two signals: the
interference was attenuated by approximately 22 dB. One output of the network
sounds like speech, with superimposed music which quickly fades away. The other
output of the network sounds like music, with a superimposed speech signal which
quickly fades away.
Our network can also be extended to more than two sources, like the Herault-Jutten
network. If the network tries to separate S sources, it requires S non-identical
inputs. Each output connects to one input, and a delayed version of each of the other outputs, for a total of 2S(S − 1) adaptive coefficients.
4 NETWORK FOR UNMIXING FILTERED SIGNALS
Figure 4: A network to unmix signals that have been mixed via filtering. The filters in the feedback path are adjusted to independently minimize the power h(O_i)
of each output.
For the mixing process that involves filtering,
I_i(t) = P_i(t) + Σ_{j≠i} Σ_k M_ijk P_j(t − δ_k),   (13)
we put filters in the feedback path of each output:
O_i(t) = I_i(t) − Σ_{j≠i} Σ_k C_jk O_j(t − δ_k),   (14)
(Jutten, et. al., 1991) also independently developed this architecture. We can use the
principle of minimum output power to develop a learning rule for this architecture:
ΔC_jk(t) = α h′(O_i(t)) O_j(t − δ_k),   (15)
for some convex function h. (Jutten, et. al., 1991) suggests using an adaptation rule
that is equivalent to choosing h(x) = x⁴.
Interestingly, neither the choice of h(x) = x² nor h(x) = x⁴ converges to the correct
solution. For both h(x) = x² and h(x) = x⁴, if the coefficients start at the correct
solution, they stay there. However, if the coefficients start at zero, they converge
to a solution that is only roughly correct (see Figure 5). These experiments show
(plot legend: absolute value, square, fourth power; x-axis: coefficient number 1-9)
Figure 5: The coefficients for one filter in the feedback path of the network. The
weights were initialized to zero. Two different speech/music mixtures were applied to the network. The solid line indicates the correct solution for the coefficients. When minimizing either h(x) = x² or h(x) = x⁴, the network converges to an incorrect solution. Minimizing h(x) = |x| seems to work well.
that the learning algorithm has multiple stable states. Experimentally, the spurious
stable states seem to perform roughly as well as the true answer.
To account for these multiple stable states, we came up with a conjecture: that
the different minimizations performed by each output fought against one another
and created the multiple stable states. Optimization theory suggests using an exact
penalty method to avoid fighting between multiple terms in a single optimization
criteria (Gill, 1981). The exact penalty method minimizes a function h(x) that has
a non-zero derivative for x close to O. We tried a simple exact penalty method of
h(x) = Ix\' and it empirically converged to the correct solution (see Figure 5). The
adaptation rule is then
(16)
In this case, the non-linearity of the adaptation rule seems to be important for the
network to converge to the true answer. For a speech/music mixture, we achieved
a signal-to-noise ratio of 20 dB using the update rule (16).
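A sketch of the feedback-filter network (14) with the exact-penalty update (16) for two sources; the tap count and step size are my own assumptions.

import numpy as np

def unmix_filtered(I1, I2, n_taps=9, eta=1e-3):
    """Outputs O_i(t) = I_i(t) - sum_k C_i[k] O_j(t-k), adapted with rule (16)."""
    C = np.zeros((2, n_taps))              # one feedback filter per output
    I = np.vstack([I1, I2]).astype(float)
    O = np.zeros_like(I)
    for t in range(n_taps, I.shape[1]):
        for i, j in ((0, 1), (1, 0)):
            past = O[j, t - n_taps:t][::-1]        # O_j(t-1), ..., O_j(t-n_taps)
            O[i, t] = I[i, t] - C[i] @ past
            C[i] += eta * np.sign(O[i, t]) * past  # minimizes h(x) = |x|
    return O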
5 FUTURE WORK
The networks described in the last two sections were found to converge empirically.
In the future, proving conditions for convergence would be useful. There are some
known pathological cases which cause these networks not to converge. For example,
using white noise as the pure signals for the network in section 3 causes it to fail,
because there is no sensible way for the network to change the delays.
More exploration of the choice of optimization function needs to be performed in
the future. The work in section 4 is just a first step which illustrates the possible
usefulness of the absolute value function.
Another avenue of future work is to try to express the blind separation problem as a
global optimization problem, perhaps by trying to minimize the mutual information
between the outputs. (Feinstein, Becker, personal communication)
6 CONCLUSIONS
We have found that the minimum output power principle can generate a subset of
the Herault-Jutten network learning rules. We use this principle to adapt extensions
of the Herault-Jutten network, which have delays in the feedback path. These new
networks unmix signals which have been mixed with single delays or via filtering.
Acknowledgements
We would like to thank Kannan Parthasarathy for his assistance in some of the
experiments. We would also like to thank David Feinstein, Sue Becker, and David
Mackay for useful discussions.
References
Cardoso, J. F., (1989) "Blind Identification of Independent Components," Proceedings of the Workshop on Higher-Order Spectral Analysis, Vail, Colorado, pp. 157-160, (1989).
Cohen, M. H., Pouliquen, P.O., Andreou, A. G., (1992) "Analog VLSI Implementation of an Auto-Adaptive Network for Real-Time Separation of Independent Signals," Advances in Neural Information Processing Systems 4, Morgan-Kaufmann,
San Mateo, CA.
Gill, P. E., Murray, W., Wright, M. H., (1981) Practical Optimization, Academic
Press, London.
Herault, J., J utten, C., (1986) "Space or Time Adaptive Signal Processing by Neural
Network Models," Neural Networks for Computing, AIP Conference Proceedings
151, pp. 207-211, Snowbird, Utah.
Jutten, C., Thi, L. N., Dijkstra, E., Vittoz, E., Caelen, J., (1991) "Blind Separation
of Sources: an Algorithm for Separation of Convolutive Mixtures," Proc. Intl. Workshop on High Order Statistics, Chamrousse, France, July 1991.
Jutten, C., Herault, J., (1991) "Blind Separation of Sources, part I: An Adaptive
Algorithm Based on Neuromimetic Architecture," Signal Processing, vol. 24, pp. 1-10.
Sorouchyari, E., (1991) "Blind Separation of Sources, Part III: Stability analysis,"
Signal Processing, vol. 24, pp. 21-29.
Vittoz, E. A., Arreguit, X., (1989) "CMOS Integration of Herault-Jutten Cells
for Separation of Sources," Proc. Workshop on Analog VLSI and Neural Systems,
Portland, Oregon, May 1989.
Widrow, B., Glover, J., McCool, J., Kaunitz, J., Williams, C., Hearn, R., Zeidler, J., Dong, E., Goodlin, R., (1975) "Adaptive Noise Cancelling: Principles and
Applications," Proc. IEEE, vol. 63, no. 12, pp. 1692-1716.
approximation with convergence rate O(1/n)
Eric Moulines
LTCI
Telecom ParisTech, Paris, France
[email protected]
Francis Bach
INRIA - Sierra Project-team
Ecole Normale Sup?erieure, Paris, France
[email protected]
Abstract
We consider the stochastic approximation problem where a convex function has
to be minimized, given only the knowledge of unbiased estimates of its gradients
at certain points, a framework which includes machine learning methods based
on the minimization of the empirical risk. We focus on problems without strong
convexity, for which all previously known algorithms achieve a convergence rate for function values of O(1/√n) after n iterations. We consider and analyze two
algorithms that achieve a rate of O(1/n) for classical supervised learning problems. For least-squares regression, we show that averaged stochastic gradient
descent with constant step-size achieves the desired rate. For logistic regression,
this is achieved by a simple novel stochastic gradient algorithm that (a) constructs
successive local quadratic approximations of the loss functions, while (b) preserving the same running-time complexity as stochastic gradient descent. For these
algorithms, we provide a non-asymptotic analysis of the generalization error (in
expectation, and also in high probability for least-squares), and run extensive experiments showing that they often outperform existing approaches.
1 Introduction
Large-scale machine learning problems are becoming ubiquitous in many areas of science and engineering. Faced with large amounts of data, practitioners typically prefer algorithms that process
each observation only once, or a few times. Stochastic approximation algorithms such as stochastic
gradient descent (SGD) and its variants, although introduced more than sixty years ago [1], still
remain the most widely used and studied method in this context (see, e.g., [2, 3, 4, 5, 6, 7]).
We consider minimizing convex functions f, defined on a Euclidean space F, given by f(θ) = E[ℓ(y, ⟨θ, x⟩)], where (x, y) ∈ F × R denotes the data and ℓ denotes a loss function that is convex with respect to the second variable. This includes logistic and least-squares regression. In
the stochastic approximation framework, independent and identically distributed pairs (x_n, y_n) are observed sequentially and the predictor defined by θ is updated after each pair is seen.
We partially understand the properties of f that affect the problem difficulty. Strong convexity (i.e.,
when f is twice differentiable, a uniform strictly positive lower-bound μ on Hessians of f) is a key property. Indeed, after n observations and with the proper step-sizes, averaged SGD achieves the rate of O(1/(μn)) in the strongly-convex case [5, 4], while it achieves only O(1/√n) in the non-strongly-convex case [5], with matching lower-bounds [8].
The main issue with strong convexity is that typical machine learning problems are high-dimensional
and have correlated variables so that
? the strong convexity constant ? is zero or very close to zero,
and in any case smaller than O(1/ n). This then makes the non-strongly convex methods better.
In this paper, we aim at obtaining algorithms that may deal with arbitrarily small strong-convexity
constants, but still achieve a rate of O(1/n).
Smoothness plays a central role in the context of deterministic optimization. The known convergence
rates for smooth optimization are better than for non-smooth optimization (e.g., see [9]). However,
for stochastic optimization the use of smoothness only leads to improvements on constants (e.g., see [10]) but not on the rate itself, which remains O(1/√n) for non-strongly-convex problems.
We show that for the square loss and for the logistic loss, we may use the smoothness of the loss and
obtain algorithms that have a convergence rate of O(1/n) without any strong convexity assumptions.
More precisely, for least-squares regression, we show in Section 2 that averaged stochastic gradient
descent with constant step-size achieves the desired rate. For logistic regression this is achieved by
a novel stochastic gradient algorithm that (a) constructs successive local quadratic approximations
of the loss functions, while (b) preserving the same running-time complexity as stochastic gradient descent (see Section 3). For these algorithms, we provide a non-asymptotic analysis of their
generalization error (in expectation, and also in high probability for least-squares), and run extensive experiments on standard machine learning benchmarks showing in Section 4 that they often
outperform existing approaches.
2 Constant-step-size least-mean-square algorithm
In this section, we consider stochastic approximation for least-squares regression, where SGD is
often referred to as the least-mean-square (LMS) algorithm. The novelty of our convergence result
is the use of the constant step-size with averaging, which was already considered by [11], but now
with an explicit non-asymptotic rate O(1/n) without any dependence on the lowest eigenvalue of
the covariance matrix.
2.1 Convergence in expectation
We make the following assumptions:
(A1) $F$ is a $d$-dimensional Euclidean space, with $d \ge 1$.
(A2) The observations $(x_n, z_n) \in F \times F$ are independent and identically distributed.
(A3) $E\|x_n\|^2$ and $E\|z_n\|^2$ are finite. Denote by $H = E(x_n \otimes x_n)$ the covariance operator from $F$ to $F$. Without loss of generality, $H$ is assumed invertible (by projecting onto the minimal subspace where $x_n$ lies almost surely). However, its eigenvalues may be arbitrarily small.
(A4) The global minimum of $f(\theta) = (1/2)\,E\big[\langle\theta, x_n\rangle^2 - 2\langle\theta, z_n\rangle\big]$ is attained at a certain $\theta_* \in F$. We denote by $\xi_n = z_n - \langle\theta_*, x_n\rangle x_n$ the residual. We have $E[\xi_n] = 0$, but in general it is not true that $E[\xi_n \mid x_n] = 0$ (unless the model is well-specified).
(A5) We study the stochastic gradient (a.k.a. least-mean-square) recursion defined as
\[
\theta_n = \theta_{n-1} - \gamma\big(\langle\theta_{n-1}, x_n\rangle x_n - z_n\big) = (I - \gamma\, x_n \otimes x_n)\,\theta_{n-1} + \gamma z_n, \quad (1)
\]
started from $\theta_0 \in F$. We also consider the averaged iterates $\bar\theta_n = (n+1)^{-1}\sum_{k=0}^{n}\theta_k$.
(A6) There exist $R > 0$ and $\sigma > 0$ such that $E[\xi_n \otimes \xi_n] \preccurlyeq \sigma^2 H$ and $E[\|x_n\|^2\, x_n \otimes x_n] \preccurlyeq R^2 H$, where $\preccurlyeq$ denotes the order between self-adjoint operators, i.e., $A \preccurlyeq B$ if and only if $B - A$ is positive semi-definite.
Discussion of assumptions. Assumptions (A1-5) are standard in stochastic approximation (see, e.g., [12, 6]). Note that for least-squares problems, $z_n$ is of the form $y_n x_n$, where $y_n \in \mathbb{R}$ is the response to be predicted as a linear function of $x_n$. We consider a slightly more general case than least-squares because we will need it for the quadratic approximation of the logistic loss in Section 3.1. Note that in assumption (A4), we do not assume that the model is well-specified.
Assumption (A6) is true for least-squares regression with almost surely bounded data, since, if $\|x_n\|^2 \le R^2$ almost surely, then $E[\|x_n\|^2\, x_n \otimes x_n] \preccurlyeq E[R^2\, x_n \otimes x_n] = R^2 H$; a similar inequality holds for the output variables $y_n$. Moreover, it also holds for data with infinite support, such as Gaussians or mixtures of Gaussians (where all covariance matrices of the mixture components are lower- and upper-bounded by a constant times the same matrix). Note that the finite-dimensionality assumption could be relaxed, but this would require notions similar to degrees of freedom [13], which is outside the scope of this paper.
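To make the recursion in Eq. (1) with averaging concrete, here is a minimal simulation sketch; it is not part of the paper, and the dimensions, the Gaussian data model, and the choice $\gamma = 1/(4R^2)$ with $R^2$ taken as $E\|x\|^2$ are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 100_000
theta_star = rng.standard_normal(d)

R2 = float(d)              # E||x||^2 = d for standard Gaussian inputs
gamma = 1.0 / (4.0 * R2)   # constant step-size, as in Theorem 1

theta = np.zeros(d)        # theta_0 = 0
theta_bar = np.zeros(d)    # running average of the iterates
for k in range(1, n + 1):
    x = rng.standard_normal(d)
    z = (x @ theta_star + rng.standard_normal()) * x   # z_n = y_n x_n, unit noise
    theta = theta - gamma * ((theta @ x) * x - z)      # Eq. (1)
    theta_bar += (theta - theta_bar) / (k + 1)         # update of bar(theta)_k

print("||bar(theta)_n - theta_*|| =", np.linalg.norm(theta_bar - theta_star))

Running this for increasing n illustrates the O(1/n) behavior of the averaged iterate that the next theorem quantifies.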
The goal of this section is to provide a non-asymptotic bound on the expectation $E[f(\bar\theta_n)] - f(\theta_*)$ that (a) does not depend on the smallest non-zero eigenvalue of $H$ (which could be arbitrarily small) and (b) still scales as $O(1/n)$.
Theorem 1 Assume (A1-6). For any constant step-size $\gamma < 1/R^2$, we have
\[
E\big[f(\bar\theta_{n-1})\big] - f(\theta_*) \;\le\; \frac{1}{2n}\left[\frac{\sigma\sqrt{d}}{\sqrt{1-\gamma R^2}} + \frac{R\,\|\theta_0-\theta_*\|}{\sqrt{\gamma R^2}}\right]^2. \quad (2)
\]
When $\gamma = 1/(4R^2)$, we obtain $E\big[f(\bar\theta_{n-1})\big] - f(\theta_*) \le \frac{2}{n}\big[\sigma\sqrt{d} + R\,\|\theta_0-\theta_*\|\big]^2$.
Proof technique. We adapt and extend a proof technique from [14] which is based on non-asymptotic expansions in powers of $\gamma$. We also use a result from [2] which studied the recursion in Eq. (1), with $x_n \otimes x_n$ replaced by its expectation $H$. See [15] for details.
Optimality of bounds. Our bound in Eq. (2) leads to a rate of $O(1/n)$, which is known to be optimal for least-squares regression (i.e., under reasonable assumptions, no algorithm, even more complex than averaged SGD, can have a better dependence in $n$) [16]. The term $\sigma^2 d/n$ is also unimprovable.
Initial conditions. If $\gamma$ is small, then the initial condition is forgotten more slowly. Note that with additional strong-convexity assumptions, the initial condition would be forgotten faster (exponentially fast without averaging), which is one of the traditional uses of constant-step-size LMS [17].
Specificity of constant step-sizes. The non-averaged iterate sequence $(\theta_n)$ is a homogeneous Markov chain; under appropriate technical conditions, this Markov chain has a unique stationary (invariant) distribution and the sequence of iterates $(\theta_n)$ converges in distribution to this invariant distribution; see [18, Chapter 17]. Denote by $\pi_\gamma$ the invariant distribution. Assuming that the Markov chain is Harris-recurrent, the ergodic theorem for Harris Markov chains shows that $\bar\theta_{n-1} = n^{-1}\sum_{k=0}^{n-1}\theta_k$ converges almost surely to $\bar\theta_\gamma \stackrel{\text{def}}{=} \int \theta\,\pi_\gamma(d\theta)$, which is the mean of the stationary distribution. Taking the expectation on both sides of Eq. (1), we get $E[\theta_n] - \theta_* = (I-\gamma H)(E[\theta_{n-1}]-\theta_*)$, which shows, using that $\lim_{n\to\infty} E[\theta_n] = \bar\theta_\gamma$, that $H\bar\theta_\gamma = H\theta_*$ and therefore $\bar\theta_\gamma = \theta_*$ since $H$ is invertible. Under slightly stronger assumptions, it can be shown that
\[
\lim_{n\to\infty} n\,E\big[(\bar\theta_n-\theta_*)^2\big] = \mathrm{Var}_{\pi_\gamma}(\theta_0) + 2\sum_{k=1}^{\infty}\mathrm{Cov}_{\pi_\gamma}(\theta_0,\theta_k),
\]
where $\mathrm{Cov}_{\pi_\gamma}(\theta_0,\theta_k)$ denotes the covariance of $\theta_0$ and $\theta_k$ when the Markov chain is started from stationarity. This implies that $\lim_{n\to\infty} n\,E[f(\bar\theta_n)-f(\theta_*)]$ has a finite limit. Therefore, this interpretation explains why the averaging produces a sequence of estimators which converges to the solution $\theta_*$ pointwise, and why the rate of convergence of $E[f(\bar\theta_n)-f(\theta_*)]$ is of order $O(1/n)$. Note that (a) our result is stronger since it is independent of the lowest eigenvalue of $H$, and (b) for losses other than quadratic, the same properties hold except that the mean under the stationary distribution does not coincide with $\theta_*$ and its distance to $\theta_*$ is typically of order $\gamma^2$ (see Section 3).
2.2 Convergence in higher orders
We are now going to consider an extra assumption in order to bound the p-th moment of the excess
risk and then get a high-probability bound. Let p be a real number greater than 1.
(A7) There exist $R > 0$, $\kappa > 0$ and $\tau \ge \sigma > 0$ such that, for all $n \ge 1$, $\|x_n\|^2 \le R^2$ a.s., and
\[
E\|\xi_n\|^p \le \tau^p R^p \quad\text{and}\quad E[\xi_n\otimes\xi_n] \preccurlyeq \sigma^2 H, \quad (3)
\]
\[
\forall z\in F,\quad E\langle z,x_n\rangle^4 \le \kappa\,\big(E\langle z,x_n\rangle^2\big)^2 = \kappa\,\langle z,Hz\rangle^2. \quad (4)
\]
The last condition in Eq. (4) says that the kurtosis of the projection of the covariates $x_n$ on any direction $z\in F$ is bounded. Note that computing the constant $\kappa$ happens to be equivalent to the optimization problem solved by the FastICA algorithm [19], which thus provides an estimate of $\kappa$. In Table 1, we provide such an estimate for the non-sparse datasets which we have used in experiments, while we consider only directions $z$ along the axes for the high-dimensional sparse datasets. For these datasets, where a given variable is equal to zero except for a few observations, $\kappa$ is typically quite large. Adapting and analyzing normalized LMS techniques [20] in this set-up is likely to improve the theoretical robustness of the algorithm (but note that the results in expectation from Theorem 1 do not use $\kappa$).
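As a concrete illustration of the axis-aligned estimate mentioned above, the following sketch (our own addition, not from the paper) computes the empirical version of the constant $\kappa$ of Eq. (4) along the coordinate axes; the sparse toy data is a made-up example.

import numpy as np

def kappa_along_axes(X):
    # Empirical version of Eq. (4) restricted to axis directions z = e_i:
    # kappa_i = E<z, x>^4 / (E<z, x>^2)^2; return the maximum over coordinates.
    second = (X ** 2).mean(axis=0)
    fourth = (X ** 4).mean(axis=0)
    return float(np.max(fourth / second ** 2))

rng = np.random.default_rng(0)
# Sparse-like data: each entry is non-zero with probability 0.01.
X = rng.standard_normal((10_000, 5)) * (rng.random((10_000, 5)) < 0.01)
print(kappa_along_axes(X))   # roughly 3/0.01 = 300 here, i.e., "quite large"

The next theorem provides a bound for the p-th moment of the excess risk.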
Theorem 2 Assume (A1-7). For any real $p > 1$, and for a step-size $\gamma \le 1/(12 p \kappa R^2)$, we have:
\[
\Big(E\big[f(\bar\theta_{n-1})-f(\theta_*)\big]^p\Big)^{1/p} \;\le\; \frac{p}{2n}\left(7\sigma\sqrt{d} + R\,\|\theta_0-\theta_*\|\sqrt{3+\frac{2}{\gamma p R^2}}\right)^2. \quad (5)
\]
For $\gamma = 1/(12 p \kappa R^2)$, we get:
\[
\Big(E\big[f(\bar\theta_{n-1})-f(\theta_*)\big]^p\Big)^{1/p} \;\le\; \frac{p}{2n}\big(7\sigma\sqrt{d} + 6\sqrt{\kappa}\,R\,\|\theta_0-\theta_*\|\big)^2.
\]
Note that to control the p-th order moment, a smaller step-size is needed, which scales as $1/p$. We can now provide a high-probability bound; the tails decay polynomially as $1/(n\,\delta^{12\gamma\kappa R^2})$ and the smaller the step-size $\gamma$, the lighter the tails.
Corollary 1 For any step-size such that $\gamma \le 1/(12\kappa R^2)$ and any $\delta \in (0,1)$,
\[
P\left(f(\bar\theta_{n-1})-f(\theta_*) \;\ge\; \frac{\big(7\sigma\sqrt{d} + R\,\|\theta_0-\theta_*\|(\sqrt{3}+\sqrt{24\kappa})\big)^2}{24\,\gamma\kappa R^2\; n\,\delta^{12\gamma\kappa R^2}}\right) \;\le\; \delta. \quad (6)
\]
3 Beyond least-squares: M-estimation
In Section 2, we have shown that for least-squares regression, averaged SGD achieves a convergence rate of $O(1/n)$ with no assumption regarding strong convexity. For all losses, with a constant step-size $\gamma$, the stationary distribution $\pi_\gamma$ corresponding to the homogeneous Markov chain $(\theta_n)$ does always satisfy $\int f'(\theta)\,\pi_\gamma(d\theta) = 0$, where $f$ is the generalization error. When the gradient $f'$ is linear (i.e., $f$ is quadratic), then this implies that $f'\big(\int \theta\,\pi_\gamma(d\theta)\big) = 0$, i.e., the averaged recursion converges pathwise to $\bar\theta_\gamma = \int \theta\,\pi_\gamma(d\theta)$, which coincides with the optimal value $\theta_*$ (defined through $f'(\theta_*) = 0$). When the gradient $f'$ is no longer linear, then $\int f'(\theta)\,\pi_\gamma(d\theta) \ne f'\big(\int\theta\,\pi_\gamma(d\theta)\big)$. Therefore, for general M-estimation problems we should expect that the averaged sequence still converges at rate $O(1/n)$ to the mean of the stationary distribution $\bar\theta_\gamma$, but not to the optimal predictor $\theta_*$. Typically, the average distance between $\theta_n$ and $\theta_*$ is of order $\gamma$ (see Section 4 and [21]), while for the averaged iterates that converge pointwise to $\bar\theta_\gamma$, it is of order $\gamma^2$ for strongly convex problems under some additional smoothness conditions on the loss functions (these are satisfied, for example, by the logistic loss [22]).
Since quadratic functions may be optimized with rate $O(1/n)$ under weak conditions, we are going to use a quadratic approximation around a well-chosen support point, which shares some similarity with the Newton procedure (however, with a non-trivial adaptation to the stochastic approximation framework). The Newton step for $f$ around a certain point $\tilde\theta$ is equivalent to minimizing a quadratic surrogate $g$ of $f$ around $\tilde\theta$, i.e., $g(\theta) = f(\tilde\theta) + \langle f'(\tilde\theta), \theta-\tilde\theta\rangle + \frac{1}{2}\langle \theta-\tilde\theta, f''(\tilde\theta)(\theta-\tilde\theta)\rangle$. If $f_n(\theta) \stackrel{\text{def}}{=} \ell(y_n, \langle\theta, x_n\rangle)$, then $g(\theta) = E[g_n(\theta)]$, with $g_n(\theta) = f(\tilde\theta) + \langle f_n'(\tilde\theta), \theta-\tilde\theta\rangle + \frac{1}{2}\langle\theta-\tilde\theta, f_n''(\tilde\theta)(\theta-\tilde\theta)\rangle$. The Newton step may thus be solved approximately with stochastic approximation (here constant-step-size LMS), with the following recursion:
\[
\theta_n = \theta_{n-1} - \gamma\, g_n'(\theta_{n-1}) = \theta_{n-1} - \gamma\big[f_n'(\tilde\theta) + f_n''(\tilde\theta)(\theta_{n-1}-\tilde\theta)\big]. \quad (7)
\]
This is equivalent to replacing the gradient $f_n'(\theta_{n-1})$ by its first-order approximation around $\tilde\theta$. A crucial point is that for machine learning scenarios where $f_n$ is the loss associated to a single data point, its complexity is only twice the complexity of a regular stochastic approximation step, since, with $f_n(\theta) = \ell(y_n, \langle x_n, \theta\rangle)$, $f_n''(\theta)$ is a rank-one matrix.
Choice of support points for quadratic approximation. An important aspect is the choice of the support point $\tilde\theta$. In this paper, we consider two strategies:
- Two-step procedure: for convex losses, averaged SGD with a step-size decaying as $O(1/\sqrt{n})$ achieves a rate (up to logarithmic terms) of $O(1/\sqrt{n})$ [5, 6]. We may thus use it to obtain a first decent estimate. The two-stage procedure is as follows (and uses $2n$ observations): $n$ steps of averaged SGD with constant step-size proportional to $1/\sqrt{n}$ to obtain $\tilde\theta$, and then averaged LMS for the Newton step around $\tilde\theta$. As shown below, this algorithm achieves the rate $O(1/n)$ for logistic regression. However, it is not the most efficient in practice.
- Support point = current average iterate: we simply consider the current averaged iterate $\bar\theta_{n-1}$ as the support point $\tilde\theta$, leading to the recursion (a minimal sketch is given after the next paragraph):
\[
\theta_n = \theta_{n-1} - \gamma\big[f_n'(\bar\theta_{n-1}) + f_n''(\bar\theta_{n-1})(\theta_{n-1}-\bar\theta_{n-1})\big]. \quad (8)
\]
Although this algorithm has been shown to be the most efficient in practice (see Section 4), we currently have no proof of convergence. Given that the behavior of the algorithms does not change much when the support point is updated less frequently than at each iteration, there may be some connections to two-time-scale algorithms (see, e.g., [23]). In Section 4, we also consider several other strategies based on doubling tricks.
Interestingly, for non-quadratic functions, our algorithm imposes a new bias (by replacing the true gradient by an approximation which is only valid close to $\bar\theta_{n-1}$) in order to reach faster convergence (due to the linearity of the underlying gradients).
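A minimal sketch of the recursion in Eq. (8) for logistic regression follows; this is our own illustration, the Gaussian data model and the step-size are assumptions, and the $f_n''$ term is applied as a rank-one update so that each step stays $O(d)$.

import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 200_000
theta_star = rng.standard_normal(d) / np.sqrt(d)
gamma = 0.05                          # assumed constant step-size

def sigmoid(t):                       # numerically stable logistic function
    return 0.5 * (1.0 + np.tanh(0.5 * t))

theta = np.zeros(d)
theta_bar = np.zeros(d)               # support point = current averaged iterate
for k in range(1, n + 1):
    x = rng.standard_normal(d)
    y = 1.0 if rng.random() < sigmoid(x @ theta_star) else -1.0
    s = y * (x @ theta_bar)
    grad = -y * sigmoid(-s) * x                        # f_n'(bar(theta)_{n-1})
    curv = sigmoid(s) * sigmoid(-s)                    # ell''(y, <x, bar(theta)>)
    theta = theta - gamma * (grad + curv * (x @ (theta - theta_bar)) * x)   # Eq. (8)
    theta_bar += (theta - theta_bar) / (k + 1)

print(np.linalg.norm(theta_bar - theta_star))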
Relationship with one-step estimators. One-step estimators (see, e.g., [24]) typically take any estimator with $O(1/n)$-convergence rate and make a full Newton step to obtain an efficient estimator (i.e., one that achieves the Cramér-Rao lower bound). Although our novel algorithm is largely inspired by one-step estimators, our situation is slightly different since our first estimator has only convergence rate $O(1/\sqrt{n})$ and is estimated on different observations.
3.1 Self-concordance and logistic regression
We make the following assumptions:
(B1) $F$ is a $d$-dimensional Euclidean space, with $d \ge 1$.
(B2) The observations $(x_n, y_n) \in F\times\{-1,1\}$ are independent and identically distributed.
(B3) We consider $f(\theta) = E\big[\ell(y_n, \langle x_n,\theta\rangle)\big]$, with the following assumption on the loss function $\ell$ (whenever we take derivatives of $\ell$, this will be with respect to the second variable):
\[
\forall (y,\hat y)\in\{-1,1\}\times\mathbb{R},\quad |\ell'(y,\hat y)| \le 1,\quad |\ell''(y,\hat y)| \le 1/4,\quad |\ell'''(y,\hat y)| \le \ell''(y,\hat y).
\]
We denote by $\theta_*$ a global minimizer of $f$, which we thus assume to exist, and we denote by $H = f''(\theta_*)$ the Hessian operator at the global optimum $\theta_*$.
(B4) We assume that there exist $R > 0$, $\kappa > 0$ and $\rho > 0$ such that $\|x_n\|^2 \le R^2$ almost surely, and
\[
E[x_n\otimes x_n] \preccurlyeq \rho\, E\big[\ell''(y_n,\langle\theta_*,x_n\rangle)\, x_n\otimes x_n\big] = \rho H, \quad (9)
\]
\[
\forall z\in F,\ \forall\theta\in F,\quad E\big[\ell''(y_n,\langle\theta,x_n\rangle)^2\langle z,x_n\rangle^4\big] \le \kappa\,\big(E\big[\ell''(y_n,\langle\theta,x_n\rangle)\langle z,x_n\rangle^2\big]\big)^2. \quad (10)
\]
Assumption (B3) is satisfied for the logistic loss and extends to all generalized linear models (see more details in [22]), and the relationship between the third and second derivatives of the loss $\ell$ is often referred to as self-concordance (see [9, 25] and references therein). Note moreover that we must have $\rho \ge 4$ and $\kappa \ge 1$.
A loose upper bound for $\rho$ is $1/\inf_n \ell''(y_n,\langle\theta_*,x_n\rangle)$, but in practice it is typically much smaller (see Table 1). The condition in Eq. (10) is hard to check because it is uniform in $\theta$. With a slightly more complex proof, we could restrict $\theta$ to be close to $\theta_*$; with such constraints, the value of $\kappa$ we have found is close to the one from Section 2.2 (i.e., without the terms in $\ell''(y_n,\langle\theta,x_n\rangle)$).
Theorem 3 Assume (B1-4), and consider the vector $\zeta_n$ obtained as follows: (a) perform $n$ steps of averaged stochastic gradient descent with constant step-size $1/(2R^2\sqrt{n})$ to get $\bar\theta_n$, and (b) perform $n$ steps of averaged LMS with constant step-size $1/R^2$ for the quadratic approximation of $f$ around $\bar\theta_n$. If $n \ge (19 + 9R\|\theta_0-\theta_*\|)^4$, then
\[
E[f(\zeta_n)] - f(\theta_*) \;\le\; \frac{\kappa^{3/2}\rho^3 d}{n}\,\big(16R\|\theta_0-\theta_*\| + 19\big)^4. \quad (11)
\]
We get an $O(1/n)$ convergence rate without assuming strong convexity, even locally, thus improving on results from [22] where the rate is proportional to $1/(n\lambda_{\min}(H))$. The proof relies on self-concordance properties and the sharp analysis of the Newton step (see [15] for details).
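The two-stage procedure analyzed in Theorem 3 can be sketched as follows; this is a schematic illustration under an assumed logistic model, the helper structure and the initialization of the second stage at the support point are our choices, while the step-sizes follow the theorem statement.

import numpy as np

def sigmoid(t):
    return 0.5 * (1.0 + np.tanh(0.5 * t))

def two_stage_logistic(X, y, R2):
    # Stage (a): n steps of averaged SGD with constant step 1/(2 R^2 sqrt(n)).
    # Stage (b): n steps of averaged LMS on the quadratic approximation of f
    # around the stage-(a) average, i.e., the recursion of Eq. (7).
    total, d = X.shape
    n = total // 2
    gamma_a = 1.0 / (2.0 * R2 * np.sqrt(n))
    theta, support = np.zeros(d), np.zeros(d)
    for k in range(n):
        xk, yk = X[k], y[k]
        theta -= gamma_a * (-yk * sigmoid(-yk * (xk @ theta)) * xk)
        support += (theta - support) / (k + 2)
    gamma_b = 1.0 / R2
    theta, zeta = support.copy(), support.copy()
    for k in range(n):
        xk, yk = X[n + k], y[n + k]
        s = yk * (xk @ support)
        grad = -yk * sigmoid(-s) * xk
        curv = sigmoid(s) * sigmoid(-s)
        theta -= gamma_b * (grad + curv * (xk @ (theta - support)) * xk)
        zeta += (theta - zeta) / (k + 2)
    return zeta

Here R2 plays the role of the bound on $\|x_n\|^2$ from (B4), and the returned vector corresponds to $\zeta_n$ in Eq. (11).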
4 Experiments
4.1 Synthetic data
Least-mean-square algorithm. We consider normally distributed inputs, with a covariance matrix $H$ that has random eigenvectors and eigenvalues $1/k$, $k = 1, \ldots, d$. The outputs are generated from a linear function with homoscedastic noise with unit signal-to-noise ratio. We consider $d = 20$ and the least-mean-square algorithm with several settings of the step-size $\gamma_n$, constant or proportional to $1/\sqrt{n}$. Here $R^2$ denotes the average radius of the data, i.e., $R^2 = \operatorname{tr} H$. In the left plot of Figure 1, we show the results, averaged over 10 replications.
Without averaging, the algorithm with constant step-size does not converge pointwise (it oscillates), and its average excess risk decays as a linear function of $\gamma$ (indeed, the gap between each value of the constant step-size is close to $\log_{10}(4)$, which corresponds to a linear function in $\gamma$).
Figure 1: Synthetic data. Left: least-squares regression. Middle: logistic regression with averaged SGD with various step-sizes, averaged (plain) and non-averaged (dashed). Right: various Newton-based schemes for the same logistic regression problem. Best seen in color; see text for details.
With averaging, the algorithm with constant step-size does converge at rate $O(1/n)$, and for all values of the constant $\gamma$ the rate is actually the same. Moreover (although it is not shown in the plots), the standard deviation is much lower.
With decaying step-size $\gamma_n = 1/(2R^2\sqrt{n})$ and without averaging, the convergence rate is $O(1/\sqrt{n})$, and it improves to $O(1/n)$ with averaging.
Logistic regression. We consider the same input data as for least-squares, but now generate outputs from the logistic probabilistic model. We compare several algorithms and display the results in Figure 1 (middle and right plots).
On the middle plot, we consider SGD; without averaging, the algorithm with constant step-size does not converge and its average excess risk reaches a constant value which is a linear function of $\gamma$ (indeed, the gap between each value of the constant step-size is close to $\log_{10}(4)$). With averaging, the algorithm does converge, but, as opposed to least-squares, to a point which is not the optimal solution, with an error proportional to $\gamma^2$ (the gap between curves is twice as large).
On the right plot, we consider various variations of our online Newton-approximation scheme. The "2-step" algorithm is the one for which our convergence rate holds ($n$ being the total number of examples, we perform $n/2$ steps of averaged SGD, then $n/2$ steps of LMS). Not surprisingly, it is not the best in practice (in particular at $n/2$, when starting the constant-step-size LMS, the performance worsens temporarily). It is classical to use doubling tricks to remedy this problem while preserving convergence rates [26]; this is done in "2-step-dbl.", which avoids the previous erratic behavior.
We have also considered getting rid of the first stage, where plain averaged stochastic gradient is used to obtain a support point for the quadratic approximation. We now consider only Newton steps but change only these support points. We consider updating the support point at every iteration, i.e., the recursion from Eq. (8), while we also consider updating it at every dyadic point ("dbl.-approx"). The last two algorithms perform very similarly and achieve the $O(1/n)$ rate early. In all experiments on real data, we have considered the simplest variant (which corresponds to Eq. (8)).
4.2 Standard benchmarks
We have considered 6 benchmark datasets which are often used in comparing large-scale optimization methods. The datasets are described in Table 1 and vary in values of $d$, $n$ and sparsity levels. These are all finite binary classification datasets with outputs in $\{-1, 1\}$. For least-squares and logistic regression, we have followed the following experimental protocol: (1) remove all outliers (i.e., sample points $x_n$ whose norm is greater than 5 times the average norm); (2) divide the dataset into two equal parts, one for training, one for testing; (3) sample within the training dataset with replacement, for 100 times the number of observations in the training set (this corresponds to 100 effective passes; in all plots, a black dashed line marks the first effective pass); (4) compute averaged costs on training and testing data (based on 10 replications). All costs are shown in log-scale, normalized so that the first iteration leads to $f(\theta_0) - f(\theta_*) = 1$.
All algorithms that we consider (ours and others) have a step-size, and typically a theoretical value that ensures convergence. We consider two settings: (1) one where this theoretical value is used; (2) one with the best testing error after one effective pass through the data (testing powers of 4 times the theoretical step-size).
Here, we only consider covertype, alpha, sido and news, as well as test errors. For all training errors
and the two other datasets (quantum, rcv1), see [15].
Least-squares regression. We compare three algorithms: averaged SGD with constant step-size, averaged SGD with step-size decaying as $C/(R^2\sqrt{n})$, and the stochastic averaged gradient (SAG) method, which is dedicated to finite training data sets [27] and has shown state-of-the-art performance in this set-up. We show the results in the two left plots of Figure 2 and Figure 3.
Averaged SGD with decaying step-size equal to $C/(R^2\sqrt{n})$ is slowest (except for sido). In particular, when the best constant $C$ is used (right columns), the performance typically starts to increase significantly. With that step-size, even after 100 passes, there is no sign of overfitting, even for the high-dimensional sparse datasets.
SAG and constant-step-size averaged SGD exhibit the best behavior, for the theoretical step-sizes and for the best constants, with a significant advantage for constant-step-size SGD. The non-sparse datasets do not lead to overfitting, even close to the global optimum of the (unregularized) training objectives, while the sparse datasets do exhibit some overfitting after more than 10 passes.
Logistic regression. We also compare two additional algorithms: our Newton-based technique and "Adagrad" [7],¹ which is a stochastic gradient method with a form of diagonal scaling that allows one to reduce the convergence rate (which is still in theory proportional to $O(1/\sqrt{n})$). We show results in the two right plots of Figure 2 and Figure 3.
Averaged SGD with decaying step-size proportional to $1/(R^2\sqrt{n})$ has the same behavior as for least-squares (the step-size is harder to tune, and the performance is always inferior, except for sido).
SAG, constant-step-size SGD and the novel Newton technique tend to behave similarly (good with
theoretical step-size, always among the best methods). They differ notably in some aspects: (1)
SAG converges quicker for the training errors (shown in [15]) while it is a bit slower for the testing
error, (2) in some instances, constant-step-size averaged SGD does underfit (covertype, alpha, news),
which is consistent with the lack of convergence to the global optimum mentioned earlier, (3) the
novel online Newton algorithm is consistently better.
On the non-sparse datasets, Adagrad performs similarly to the Newton-type method (often better in early iterations and worse later), except for the alpha dataset, where the step-size is harder to tune (the best step-size tends to have early iterations that make the cost go up significantly). On sparse datasets like rcv1, the performance is essentially the same as Newton. On the sido dataset, Adagrad (with fixed step-size, left column) achieves a good testing loss quickly and then levels off, for reasons we cannot explain. On the news dataset, it is inferior without parameter tuning and a bit better with it. Adagrad uses a diagonal rescaling; it could be combined with our technique, and early experiments show that this improves results but is more sensitive to the choice of step-size.
Overall, even with $d$ and $\kappa$ very large (where our bounds are vacuous), the performance of our algorithm still achieves the state of the art, while being more robust to the selection of the step-size: finer quantities like degrees of freedom [13] should be able to quantify more accurately the quality of the new algorithms.
5 Conclusion
In this paper, we have presented two stochastic approximation algorithms that can achieve rates of $O(1/n)$ for logistic and least-squares regression, without strong-convexity assumptions. Our analysis reinforces the key role of averaging in obtaining fast rates, in particular with large step-sizes. Our work can naturally be extended in several ways: (a) an analysis of the algorithm that updates the support point of the quadratic approximation at every iteration; (b) proximal extensions (easy to implement, but potentially harder to analyze); (c) adaptive ways to find the constant step-size; (d) step-sizes that depend on the iterates to increase robustness, like in normalized LMS [20]; and (e) non-parametric analysis to improve our theoretical results for large values of $d$.
Acknowledgements. Francis Bach was partially supported by the European Research Council
(SIERRA Project). We thank Aymeric Dieuleveut and Nicolas Flammarion for helpful discussions.
¹ Since a bound on $\|\theta_*\|$ is not available, we have used step-sizes proportional to $1/\sup_n \|x_n\|_\infty$.
Table 1: Datasets used in our experiments. We report the proportion of non-zero entries, as well as estimates for the constants $\kappa$ and $\rho$ used in our theoretical results, together with the non-sharp constant which is typically used in the analysis of logistic regression and which our analysis avoids (these are computed for non-sparse datasets only).
Name       | d         | n       | sparsity | kappa     | rho | 1/inf_n ell''(y_n, <theta_*, x_n>)
quantum    | 79        | 50 000  | 100 %    | 5.8 ×10^2 | 16  | 8.5 ×10^2
covertype  | 55        | 581 012 | 100 %    | 9.6 ×10^2 | 160 | 3 ×10^12
alpha      | 501       | 500 000 | 100 %    | 6         | 18  | 8 ×10^4
sido       | 4 933     | 12 678  | 10 %     | 1.3 ×10^4 | -   | -
rcv1       | 47 237    | 20 242  | 0.2 %    | 2 ×10^4   | -   | -
news       | 1 355 192 | 19 996  | 0.03 %   | 2 ×10^4   | -   | -

[Figure 2 graphics: test-performance curves for covertype and alpha (square and logistic losses, theoretical and optimized step-sizes); legend entries include 1/R^2, 1/(R^2 n^{1/2}), C/R^2, C/(R^2 n^{1/2}), SAG, Adagrad, and Newton.]
Figure 2: Test performance for least-squares regression (two left plots) and logistic regression (two right plots). From top to bottom: covertype, alpha. Left: theoretical step-sizes; right: step-sizes optimized for performance after one effective pass through the data. Best seen in color.
[Figure 3 graphics: test-performance curves for sido and news, with the same legends as Figure 2.]
Figure 3: Test performance for least-squares regression (two left plots) and logistic regression (two right plots). From top to bottom: sido, news. Left: theoretical step-sizes; right: step-sizes optimized for performance after one effective pass through the data. Best seen in color.
References
[1] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[2] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Adv. NIPS, 2008.
[4] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proc. ICML, 2007.
[5] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[6] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Adv. NIPS, 2011.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2010.
[8] A. S. Nemirovsky and D. B. Yudin. Problem complexity and method efficiency in optimization. Wiley & Sons, 1983.
[9] Y. Nesterov. Introductory lectures on convex optimization. Kluwer, 2004.
[10] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365-397, 2012.
[11] L. Györfi and H. Walk. On the averaged stochastic approximation for linear regression. SIAM Journal on Control and Optimization, 34(1):31-61, 1996.
[12] H. J. Kushner and G. G. Yin. Stochastic approximation and recursive algorithms and applications. Springer-Verlag, second edition, 2003.
[13] C. Gu. Smoothing spline ANOVA models. Springer, 2002.
[14] R. Aguech, E. Moulines, and P. Priouret. On a perturbation approach for the analysis of stochastic tracking algorithms. SIAM J. Control and Optimization, 39(3):872-899, 2000.
[15] F. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). Technical Report 00831977, HAL, 2013.
[16] A. B. Tsybakov. Optimal rates of aggregation. In Proc. COLT, 2003.
[17] O. Macchi. Adaptive processing: The least mean squares approach with applications in transmission. Wiley West Sussex, 1995.
[18] S. Meyn and R. Tweedie. Markov Chains and Stochastic Stability. Cambridge U. P., 2009.
[19] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483-1492, 1997.
[20] N. J. Bershad. Analysis of the normalized LMS algorithm with Gaussian inputs. IEEE Transactions on Acoustics, Speech and Signal Processing, 34(4):793-806, 1986.
[21] A. Nedic and D. Bertsekas. Convergence rate of incremental subgradient algorithms. Stochastic Optimization: Algorithms and Applications, pages 263-304, 2000.
[22] F. Bach. Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression. Technical Report 00804431-v2, HAL, 2013.
[23] V. S. Borkar. Stochastic approximation with two time scales. Systems & Control Letters, 29(5):291-294, 1997.
[24] A. W. Van der Vaart. Asymptotic Statistics, volume 3. Cambridge Univ. Press, 2000.
[25] F. Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4:384-414, 2010.
[26] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proc. COLT, 2001.
[27] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Technical Report 00860051, HAL, 2013.
Message Passing Inference with Chemical Reaction Networks
Nils Napp
Wyss Institute for Biologically Inspired Engineering
Harvard University
Cambridge, MA 02138
[email protected]

Ryan Prescott Adams
School of Engineering and Applied Sciences
Harvard University
Cambridge, MA 02138
[email protected]
Abstract
Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks,
and linear systems. In the future such abstractions might enable nanoscale devices
that can sense and control the world at a molecular scale. Just as in macroscale
robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary
probabilistic graphical models, represented as factor graphs over discrete random
variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message
passing can be implemented in terms of reactions between chemical species whose
concentrations represent probabilities. We show algebraically that the steady-state concentrations of these species correspond to the marginal distributions of the random variables in the graph, and we validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs and approximate solutions for loopy graphs.
1 Introduction
Recent advances in nanoscale devices and biomolecular synthesis have opened up new and exciting
possibilities for constructing microscopic systems that can sense and autonomously manipulate the
world. Necessary to such advances is the development of computational mechanisms and associated
abstractions for algorithmic control of these nanorobots. Work on molecular programming has explored the power of chemical computation [3, 6, 11] and resulted in in vitro biomolecular implementations of various such abstractions, including logic gates [16], artificial neural networks [9, 10, 14],
tiled self-assembly models [12, 15], and linear functions and systems [4, 13, 20]. Similarly, in
vivo gene regulatory networks can be designed that when transformed into cells implement devices
such as oscillators [8], intracellularly coupled oscillators [7], or distributed algorithms like pattern
formation [1]. Many critical information processing tasks can be framed in terms of probabilistic
inference, in which noisy or incomplete information is accumulated to produce statistical estimates
of hidden structure. In fact, we believe that this particular computational abstraction is ideally suited
to the noisy and often poorly characterized microscopic world. In this work, we develop a chemical
reaction network for performing inference in probabilistic graphical models. We show that message
passing schemes, such as belief propagation, map relatively straightforwardly onto sets of chemical
reactions, which can be thought of as the "assembly language" of both in vitro and in vivo computation at the molecular scale. The long-term possibilities of such technology are myriad: adaptive tissue-sensitive drug delivery, in situ chemical sensing, and identification of disease states.
[Figure 1 graphics: three panels titled "Abstract Problem & Algorithm", "Low-Level 'Assembly' Language", and "Physical Implementation"; see the caption below.]
Figure 1: Inference at different levels of abstraction. (a) Factor graph over two random variables.
Inference can be performed efficiently by passing messages (shown as gray arrows) between vertices, see Section 2. (b) Message passing implemented at a lower level of abstraction. Chemical
species represent the different components of message vectors. The chemical reaction networks
constructed in Section 3 perform the same computation as the sum-product message passing algorithm. (c) Schematic representation of DNA strand displacement. A given reaction network can be
implemented in different physical systems, e.g. DNA strand displacement cascades [5, 17].
At the small scales of interest, systems are typically modeled as deterministic chemical reaction networks or their stochastic counterparts that explicitly model fluctuations and noise. However, chemical reactions are not only models, but can be thought of as specifications or abstract computational frameworks themselves. For example, arbitrary reaction networks can be simulated by DNA strand displacement systems [5, 17], where some strands correspond to the chemical species in the specifying reaction network. Reaction rates in these systems can be tuned over many orders of magnitude by adjusting the toehold length of displacement steps, and high-order reactions can be approximated by introducing auxiliary species. We take advantage of this abstraction by "compiling" the sum-product algorithm for discrete variable factor graphs into a chemical reaction network, where the
concentrations of some species represent conditional and marginal distributions of variables in the
graph. In some ways, this representation is very natural: while normalization is a constant concern
in digital systems, our chemical design conserves species within some subsets and thus implicitly
and continuously normalizes its estimates. The computation is complete when the reaction network
reaches equilibrium. Variables in the graph can be conditioned upon by adjusting the reaction rates
corresponding to unary potentials in the graph.
Section 2 provides a brief review of factor graphs and the sum-product algorithm. Section 3 introduces notation and concepts for chemical reaction networks. Section 4 shows how inference on
factor graphs can be compiled into reaction networks, and in Section 5, we show several example
networks and compare the results of molecular simulations to standard digital inference procedures.
To aid parsing the potentially tangled notation resulting from mixing probabilistic inference tools
with chemical reaction models, this paper follows these general notational guidelines: capital letters
denote constants, such as set sizes, and other quantities, such as tuples and message types; lower
case letters denote parameters, such as reaction rates and indices; bold face letters denote vectors
and subscripts elements of that vector; scripted upper letters indicate sets; random variables are
always denoted by x or their vector version; and species names have a sans-serif font.
2 Graphical Models and Probabilistic Inference
Graphical models are popular tools for reasoning about complicated probability distributions. In
most types of graphical models, vertices represent random variables and edges reflect dependence
structure. Here, we focus on the factor graph formalism, in which there are two types of vertices
that have a bipartite structure: variable nodes (typically drawn as circles), which represent random
variables, and factor nodes (typically drawn as squares), which represent potentials (also called
compatibility functions) coupling the random variables. Factor graphs encode the factorization of a probability distribution and therefore its conditional independence structure. Other graphical models, such as Bayesian networks, can be converted to factor graphs, and thus factor graph algorithms are directly applicable to other types of graphical models; see [2, Ch. 8].
Let $G$ be a factor graph over $N$ random variables $\{x_n\}_{n=1}^N$, where $x_n$ takes one of $K_n$ discrete values. The global $N$-dimensional random variable $\mathbf{x}$ takes on values in the (potentially huge) product space $\mathcal{K} = \prod_{n=1}^N \{1, \ldots, K_n\}$. The other nodes of $G$ are called factors, and every edge in $G$ connects exactly one factor node and one variable node. In general, $G$ can have $J$ factors $\{\psi_j(\mathbf{x}_j)\}_{j=1}^J$, where we use $\mathbf{x}_j$ to indicate the subset of random variables that are neighbors of factor $j$, i.e., $\{x_n \mid n \in \mathrm{ne}(j)\}$. Each $\mathbf{x}_j$ takes on values in the (potentially much smaller) space $\mathcal{K}_j = \prod_{n\in \mathrm{ne}(j)} \{1,\ldots,K_n\}$, and each $\psi_j$ is a non-negative scalar function on $\mathcal{K}_j$. Together, the structure of $G$ and the particular factors $\psi_j$ define a joint distribution on $\mathbf{x}$:
\[
\Pr(\mathbf{x}) = \Pr(x_1, x_2, \cdots, x_N) = \frac{1}{Z}\prod_{j=1}^{J}\psi_j(\mathbf{x}_j), \quad (1)
\]
where $Z$ is the appropriate normalizing constant. Figure 1a shows a simple factor graph with two variable nodes and three factors. It implies that the joint distribution of $x_1$ and $x_2$ has the form $\Pr(x_1,x_2) = \frac{1}{Z}\psi_1(x_1)\,\psi_2(x_2)\,\psi_3(x_1,x_2)$.
The sum-product algorithm (belief propagation) is a dynamic programming technique for performing marginalization in a factor graph. That is, it computes sums of the form
\[
\Pr(x_n) = \frac{1}{Z}\sum_{\mathbf{x}\backslash x_n}\ \prod_{j=1}^{J}\psi_j(\mathbf{x}_j). \quad (2)
\]
For tree-structured factor graphs, the sum-product algorithm efficiently recovers the exact marginals. For more general graphs the sum-product algorithm often converges to useful approximations, in which case it is called loopy belief propagation.
The sum-product algorithm proceeds by passing "messages" along the graph edges. There are two kinds of messages: messages from a factor node to a variable node, and messages from a variable node to a factor node. In order to make clear what quantities are represented by chemical species concentrations in Section 4, we use somewhat unconventional notation. The $k$th entry of the sum message from factor node $j$ to variable node $n$ is denoted $S_k^{(j\to n)}$, and the entire $K_n$-dimensional vector is denoted by $\mathbf{S}^{(j\to n)}$. The $k$th entry of the product message from variable $n$ to factor node $j$ is denoted by $P_k^{(n\to j)}$, and the entire $K_j$-dimensional vector is denoted $\mathbf{P}^{(n\to j)}$. Figure 1a shows a simple factor graph with message names and their directions shown as gray arrows. Sum messages from $j$ are computed as the weighted sum of product messages over the domain $\mathcal{K}_j$ of $\psi_j$:
\[
S_k^{(j\to n)} = \sum_{\mathbf{k}_j:\,k_{j_n}=k}\ \psi_j(\mathbf{x}_j=\mathbf{k}_j)\prod_{n'\in \mathrm{ne}(j)\backslash n} P_{k_{j_{n'}}}^{(n'\to j)}, \quad (3)
\]
where $\mathrm{ne}(j)\backslash n$ refers to the variable node neighbors of $j$ except $n$, and $k_{j_n} = k$ to the set of all $\mathbf{k}_j \in \mathcal{K}_j$ where the entry in the dimension of $n$ is fixed to $k$. Product messages are computed by taking the component-wise product of incoming sum messages:
\[
P_k^{(n\to j)} = \prod_{j'\in \mathrm{ne}(n)\backslash j} S_k^{(j'\to n)}. \quad (4)
\]
Up to normalization, the marginals can be computed from the product of incoming sum messages:
\[
\Pr(x_n = k) \propto \prod_{j\in \mathrm{ne}(n)} S_k^{(j\to n)}. \quad (5)
\]
The sum-product algorithm corresponds to fixed-point iterations that minimize the Bethe free energy. This observation leads to both partial-update and damped variants of sum-product, as well as asynchronous versions [18, Ch.6][19]. The validity of damped asynchronous sum-product is what enables us to frame the computation as a chemical reaction network. The continuous ODE description of species concentrations that represent messages can be thought of as an infinitesimally small version of damped asynchronous update rules.
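As a reference implementation of Eqs. (3)-(5), the following sketch (ours, not part of the paper) computes the messages and marginals for the two-variable graph of Figure 1a; the numeric potentials are arbitrary placeholders.

import numpy as np

# Potentials for the graph of Figure 1a (values are illustrative placeholders).
psi1 = np.array([1.0, 0.5])            # unary factor on x1
psi2 = np.array([0.3, 1.0])            # unary factor on x2
psi3 = np.array([[1.0, 0.2],           # pairwise factor psi3(x1, x2)
                 [0.2, 1.0]])

S_1to1 = psi1                          # Eq. (3) at a leaf: no incoming products
S_2to2 = psi2
P_1to3 = S_1to1                        # Eq. (4): only one other neighbor
P_2to3 = S_2to2
S_3to1 = psi3 @ P_2to3                 # Eq. (3): sum over x2
S_3to2 = psi3.T @ P_1to3               # Eq. (3): sum over x1

marg1 = S_1to1 * S_3to1                # Eq. (5), up to normalization
marg2 = S_2to2 * S_3to2
print(marg1 / marg1.sum(), marg2 / marg2.sum())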
3 Chemical Reaction Networks
The following model describes how a set of $M$ chemical species $\mathcal{Z} = \{Z_1, Z_2, \ldots, Z_M\}$ interact and their concentrations evolve over time. Each reaction has the general form
\[
r_1 Z_1 + r_2 Z_2 + \cdots + r_M Z_M \ \xrightarrow{\ \lambda\ }\ p_1 Z_1 + p_2 Z_2 + \cdots + p_M Z_M. \quad (6)
\]
In this generic representation most of the coefficients $r_m \in \mathbb{N}$ and $p_m \in \mathbb{N}$ are typically zero (where $\mathbb{N}$ denotes the non-negative integers). The species on the left with non-zero coefficients are called reactants and are consumed during the reaction. The species on the right with non-zero entries are called products and are produced during the reaction. Species that participate in a reaction, i.e., $r_m > 0$, but where no net consumption or production occurs, i.e., $r_m = p_m$, are called catalysts. They change the dynamics of a particular reaction without being changed themselves.
A reaction network over a given set of species consists of a set of $Q$ reactions $\mathcal{R} = \{R_1, R_2, \ldots, R_Q\}$, where each reaction is a triple of the reaction parameters in (6),
\[
R_q = (\mathbf{r}^q, \lambda_q, \mathbf{p}^q). \quad (7)
\]
For example, in a reaction $R_q \in \mathcal{R}$ where species $Z_1$ and $Z_3$ form a new chemical species $Z_2$ at a rate of $\lambda_q$, the reactant vector $\mathbf{r}^q$ is zero everywhere except at $r^q_1 = r^q_3 = 1$. The associated product vector $\mathbf{p}^q$ is zero everywhere except at $p^q_2 = 1$. In the reaction notation, where non-participating species are dropped, reaction $R_q$ can be compactly written as
\[
Z_1 + Z_3 \ \xrightarrow{\ \lambda_q\ }\ Z_2. \quad (8)
\]
3.1 Mass Action Kinetics
The concentration of each chemical species $Z_m$ is denoted by $[Z_m]$. A reaction network describes the evolution of species concentrations as a set of coupled non-linear differential equations. For each species concentration $[Z_m]$ the rate of change is given by mass action kinetics,
\[
\frac{d[Z_m]}{dt} = \sum_{q=1}^{Q}\lambda_q \prod_{m'=1}^{M}[Z_{m'}]^{r^q_{m'}}\,\big(p^q_m - r^q_m\big). \quad (9)
\]
Based on the fact that reactant coefficients appear as powers, the sum $\sum_{m=1}^M r_m$ is called the order of a reaction. For example, if the only reaction in a network were the second-order reaction (8), the concentration dynamics of $[Z_1]$ would be
\[
\frac{d[Z_1]}{dt} = -\lambda_q\,[Z_1][Z_3]. \quad (10)
\]
Similar to [4] we design reaction networks where the equilibrium concentration of some species
corresponds to the results we are interested in computing. The reaction networks in the following
section conserve mass and do not require flux in or out of the system, and therefore the solutions are
guaranteed to be bounded. While we cannot rule out oscillations in general, the message passing
methods these reactions are simulating correspond to an energy minimization problem. As such, we
suspect that the particular reaction networks presented here always converge to their equilibrium.
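A generic mass-action right-hand side in the sense of Eq. (9) can be sketched as follows; this helper is our own illustration (the reaction encoding as triples follows Eq. (7)), and it can be passed to any standard ODE integrator.

import numpy as np

def mass_action_rhs(reactions):
    # Build d[Z]/dt for a reaction network, Eq. (9). Each reaction is a triple
    # (r, lam, p) of reactant counts, rate constant, and product counts.
    R = [(np.asarray(r), lam, np.asarray(p)) for r, lam, p in reactions]
    def rhs(t, z):
        dz = np.zeros_like(z)
        for r, lam, p in R:
            flux = lam * np.prod(z ** r)   # lam * prod_m [Z_m]^{r_m}
            dz += flux * (p - r)           # net stoichiometry per species
        return dz
    return rhs

# Example: the single second-order reaction (8), Z1 + Z3 -> Z2, so that
# d[Z1]/dt = -lam [Z1][Z3] as in Eq. (10).
rhs = mass_action_rhs([((1, 0, 1), 2.0, (0, 1, 0))])
print(rhs(0.0, np.array([1.0, 0.0, 0.5])))   # [-1.0, 1.0, -1.0]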
4 Representing Graphical Models with Reaction Networks
In the following compilation procedure, each message and marginal probability is represented by a set of distinct chemical species. We design networks that cause these species to interact in such a way that, at steady state, the concentrations of some species represent the marginal distributions of the variable nodes in a factor graph. When new information arrives, the network equilibrates to the new, correct, value. Since messages in the sum-product inference algorithm are computed from other messages, the reaction networks that implement sending messages describe how species from one message catalyze the species of another message.
Beliefs and messages are represented as concentrations of chemical species: each component of a sum message, $S_k^{(j\to n)}$, has an associated chemical species $\mathsf{S}_k^{(j\to n)}$; each component of a product message, $P_k^{(n\to j)}$, has an associated chemical species $\mathsf{P}_k^{(n\to j)}$; and each component of a marginal probability distribution, $\Pr(x_n = k)$, has an associated chemical species $\mathsf{P}^n_k$. In addition, each message and marginal probability distribution has a chemical species with a zero subscript that represents unassigned probability mass. Together, the species associated with a message or marginal probability are called a set of belief species, and the reaction networks presented in the subsequent sections are designed to conserve species (and by extension their concentrations) within each such set. For example, the concentrations of the belief species $\mathcal{P}^n = \{\mathsf{P}^n_k\}_{k=0}^{K_n}$ of $\Pr(x_n)$ have a constant sum, $\sum_{k=0}^{K_n}[\mathsf{P}^n_k]$, determined by the initial concentrations. These sets of belief species are a chemical representation of the left-hand sides of Equations 3-5. The next few sections present reaction networks whose dynamics implement their right-hand sides.
4.1 Belief Recycling Reactions
Each set of belief species has an associated set of reactions that recycle assigned probabilities to the unassigned species. By continuously and dynamically reallocating probability mass, the resulting reaction network can adapt to changing potential functions $\psi_j$, i.e., new information.
For example, the factor graph shown in Figure 1a has 8 distinct sets of belief species: 2 representing the marginal probabilities of $x_1$ and $x_2$, and 6 (ignoring messages to leaf factor nodes) representing messages. The associated recycling reactions are
\[
\begin{aligned}
P_k^{(1\to 3)} &\xrightarrow{\;\lambda_r\;} P_0^{(1\to 3)} & P_k^{(2\to 3)} &\xrightarrow{\;\lambda_r\;} P_0^{(2\to 3)} & S_k^{(1\to 1)} &\xrightarrow{\;\lambda_r\;} S_0^{(1\to 1)} & S_k^{(2\to 2)} &\xrightarrow{\;\lambda_r\;} S_0^{(2\to 2)} \\
S_k^{(3\to 1)} &\xrightarrow{\;\lambda_r\;} S_0^{(3\to 1)} & S_k^{(3\to 2)} &\xrightarrow{\;\lambda_r\;} S_0^{(3\to 2)} & P_k^{1} &\xrightarrow{\;\lambda_r\;} P_0^{1} & P_k^{2} &\xrightarrow{\;\lambda_r\;} P_0^{2}.
\end{aligned} \quad (11)
\]
By choosing a smaller rate $\lambda_r$, less of the probability mass will be unassigned at steady state, i.e., quantities will be closer to normalized; however, the speed at which the reaction network reaches steady state decreases; see Section 5.
4.2 Sum Messages
In the reactions that implement messages from factor to variable nodes, the species of incoming messages catalyze the assignment of message species belonging to outgoing messages. The entries in the factor tables determine the associated rate constants. The $k$th message component from a factor node $\psi_j$ to the variable node $x_n$ is implemented by reactions of the form
\[
S_0^{(j\to n)} + \sum_{n'\in \mathrm{ne}(j)\backslash n} P_{k_{j_{n'}}}^{(n'\to j)} \ \xrightarrow{\ \psi_j(\mathbf{x}_j=\mathbf{k}_j)\ }\ S_k^{(j\to n)} + \sum_{n'\in \mathrm{ne}(j)\backslash n} P_{k_{j_{n'}}}^{(n'\to j)}, \quad (12)
\]
where the $n$th component of $\mathbf{k}_j$ is clamped to $k$, $k_{j_n} = k$. Using the law of mass action, the kinetics for each sum message species are given by
\[
\frac{d[S_k^{(j\to n)}]}{dt} = \sum_{\mathbf{k}_j:\,k_{j_n}=k}\psi_j(\mathbf{x}_j=\mathbf{k}_j)\,[S_0^{(j\to n)}]\prod_{n'\in \mathrm{ne}(j)\backslash n}[P_{k_{j_{n'}}}^{(n'\to j)}] \;-\; \lambda_r\,[S_k^{(j\to n)}]. \quad (13)
\]
At steady state the concentration of $S_k^{(j\to n)}$ is given by
\[
[S_k^{(j\to n)}] = \frac{[S_0^{(j\to n)}]}{\lambda_r}\sum_{\mathbf{k}_j:\,k_{j_n}=k}\psi_j(\mathbf{x}_j=\mathbf{k}_j)\prod_{n'\in \mathrm{ne}(j)\backslash n}[P_{k_{j_{n'}}}^{(n'\to j)}], \quad (14)
\]
where all $[S_k^{(j\to n)}]$ species concentrations have the same factor $[S_0^{(j\to n)}]/\lambda_r$. Their relative concentrations are exactly the message according to Equation (3). As $\lambda_r$ decreases, the concentration of unassigned probability mass decreases, and the concentration normalized by the constant sum of all the related belief species can be interpreted as a probability. For example, the four factor-to-variable messages in Figure 1(a) can be implemented with the following reactions
\[
\begin{aligned}
S_0^{(1\to 1)} &\xrightarrow{\;\psi_1(k)\;} S_k^{(1\to 1)}
& S_0^{(3\to 1)} + P_{k'}^{(2\to 3)} &\xrightarrow{\;\psi_3(k,k')\;} S_k^{(3\to 1)} + P_{k'}^{(2\to 3)} \\
S_0^{(2\to 2)} &\xrightarrow{\;\psi_2(k)\;} S_k^{(2\to 2)}
& S_0^{(3\to 2)} + P_{k'}^{(1\to 3)} &\xrightarrow{\;\psi_3(k',k)\;} S_k^{(3\to 2)} + P_{k'}^{(1\to 3)}.
\end{aligned} \quad (15)
\]
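To make the mapping from factor entries to rate constants explicit, the following sketch (ours) enumerates the reactions of Eq. (12) for a pairwise factor, reproducing the pattern of Eq. (15); the species-name formatting is an assumption for display only.

import numpy as np

def factor_to_variable_reactions(psi, name="3", neighbors=("1", "2")):
    # List the reactions of Eq. (12) for a pairwise factor psi(x_a, x_b).
    # Incoming product-message species act as catalysts; the factor entry
    # psi(k, k') is the rate constant of the corresponding reaction.
    out = []
    a, b = neighbors
    for k in range(psi.shape[0]):          # messages to the first neighbor
        for kp in range(psi.shape[1]):
            out.append(f"S0^({name}->{a}) + P{kp}^({b}->{name}) "
                       f"--{psi[k, kp]}--> S{k}^({name}->{a}) + P{kp}^({b}->{name})")
    for k in range(psi.shape[1]):          # messages to the second neighbor
        for kp in range(psi.shape[0]):
            out.append(f"S0^({name}->{b}) + P{kp}^({a}->{name}) "
                       f"--{psi[kp, k]}--> S{k}^({name}->{b}) + P{kp}^({a}->{name})")
    return out

for rxn in factor_to_variable_reactions(np.array([[1.0, 0.2], [0.2, 1.0]])):
    print(rxn)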
[Figure 2 graphics: panels (a) and (b) show the factor-graph structures described in the caption below; panel (c) lists the factor tables used in Section 5.1, with entries taking values such as 0.1, 1, 2, and 3.]
Figure 2: Examples of non-trivial factor graphs. (a) Four-variable factor graph with binary factors. The leaf factors can be used to specify information about a particular variable. (b) Example of a small three-variable cyclic graph. (c) Factors for (a) used in the simulation experiments in Section 5.1.
4.3 Product Messages
Reaction networks that implement variable-to-factor-node messages have a similar, but slightly simpler, structure. Again, each component species of the message is catalyzed by all incoming message species, but only by those of the same component. The rate constant for all product message reactions is the same, $\lambda_{\text{prod}}$, resulting in reactions of the following form:
\[
P_0^{(n\to j)} + \sum_{j'\in \mathrm{ne}(n)\backslash j} S_k^{(j'\to n)} \ \xrightarrow{\ \lambda_{\text{prod}}\ }\ P_k^{(n\to j)} + \sum_{j'\in \mathrm{ne}(n)\backslash j} S_k^{(j'\to n)}. \quad (16)
\]
The dynamics of the message component species are given by
\[
\frac{d[P_k^{(n\to j)}]}{dt} = \lambda_{\text{prod}}\,[P_0^{(n\to j)}]\prod_{j'\in \mathrm{ne}(n)\backslash j}[S_k^{(j'\to n)}] \;-\; \lambda_r\,[P_k^{(n\to j)}]. \quad (17)
\]
At steady state the concentration of $P_k^{(n\to j)}$ is given by
\[
\frac{\lambda_r}{\lambda_{\text{prod}}\,[P_0^{(n\to j)}]}\,[P_k^{(n\to j)}] = \prod_{j'\in \mathrm{ne}(n)\backslash j}[S_k^{(j'\to n)}]. \quad (18)
\]
Since all component species of product messages have the same multiplier $\lambda_r/(\lambda_{\text{prod}}[P_0^{(n\to j)}])$, the steady-state species concentrations compute the correct message according to Equation 4. For example, the two different sets of variable-to-factor messages in Figure 1a are
\[
P_0^{(1\to 3)} + S_k^{(1\to 1)} \ \xrightarrow{\ \lambda_{\text{prod}}\ }\ P_k^{(1\to 3)} + S_k^{(1\to 1)}, \qquad
P_0^{(2\to 3)} + S_k^{(2\to 2)} \ \xrightarrow{\ \lambda_{\text{prod}}\ }\ P_k^{(2\to 3)} + S_k^{(2\to 2)}. \quad (19)
\]
Similarly, the reactions to compute the marginal probabilities of $x_1$ and $x_2$ in Figure 1a are
\[
P_0^{1} + S_k^{(3\to 1)} + S_k^{(1\to 1)} \ \xrightarrow{\ \lambda_{\text{prod}}\ }\ P_k^{1} + S_k^{(3\to 1)} + S_k^{(1\to 1)}, \qquad
P_0^{2} + S_k^{(3\to 2)} + S_k^{(2\to 2)} \ \xrightarrow{\ \lambda_{\text{prod}}\ }\ P_k^{2} + S_k^{(3\to 2)} + S_k^{(2\to 2)}. \quad (20)
\]
The two rate constants $\gamma_{prod}$ and $\gamma_r$ can be adjusted to trade off speed vs. accuracy; see Section 5.
Together, the reactions for recycling probability mass, implementing sum messages, and implementing product messages define a reaction network whose equilibrium computes the messages and marginal probabilities via the sum-product algorithm. As probability mass is continuously recycled, messages computed on partial information will readjust and settle to the correct value. There is a clear dependency structure among messages: sum messages from leaf nodes do not depend on any other messages. Once they are computed, i.e., the reactions have equilibrated, the message species continue to catalyze the next set of messages until they have reached the correct value, and so on.
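To make the whole pipeline concrete, here is a self-contained sketch (not from the paper; the factor tables, rates, and variable names are illustrative) that couples two leaf-factor sum messages with the product reactions for a single binary variable and integrates the resulting ODEs. The belief species settle to the exact marginal $\Pr(x = k) \propto \phi_1(k)\phi_2(k)$.

```python
# Minimal sketch, not from the paper: a chain phi1 -- x -- phi2 compiled to
# reactions (Section 4) and integrated with explicit Euler. The belief
# species P_k should settle to the exact marginal Pr(x=k) ~ phi1(k)*phi2(k).
import numpy as np

phi1 = np.array([1.0, 0.1])     # hypothetical leaf factor tables
phi2 = np.array([0.3, 0.7])
g_r, g_prod = 0.1, 50.0         # recycling and product rate constants

S1 = np.array([1.0, 0.0, 0.0])  # [S0, S_k=1, S_k=2] for message phi1 -> x
S2 = np.array([1.0, 0.0, 0.0])  # same for phi2 -> x
P = np.array([1.0, 0.0, 0.0])   # [P0, P_k=1, P_k=2] belief species for x

dt, steps = 1e-3, 100_000
for _ in range(steps):
    dS1 = phi1 * S1[0] - g_r * S1[1:]                    # Eq. (13)
    dS2 = phi2 * S2[0] - g_r * S2[1:]
    dP = g_prod * P[0] * S1[1:] * S2[1:] - g_r * P[1:]   # Eq. (17)
    S1 = S1 + dt * np.concatenate(([-dS1.sum()], dS1))   # mass conserved
    S2 = S2 + dt * np.concatenate(([-dS2.sum()], dS2))
    P = P + dt * np.concatenate(([-dP.sum()], dP))

exact = phi1 * phi2 / (phi1 * phi2).sum()
print("belief:", P[1:] / P[1:].sum(), " exact:", exact)
```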
                    Pr(x1)          Pr(x2)          Pr(x3)                 Pr(x4)
exact               0.692  0.308    0.598  0.402    0.526  0.392  0.083    0.664  0.336
slow (γr = 0.01)    0.690  0.306    0.583  0.393    0.520  0.394  0.083    0.665  0.333
fast (γr = 0.1)     0.661  0.294    0.449  0.302    0.508  0.379  0.080    0.646  0.326

Figure 3: Inference results for the factor graph in Figure 2(a). Colored boxes show the trajectories of a belief species set in a simulated reaction network. The simulation time (3000 sec) is along the x-dimension. Halfway through the simulation the factor attached to x1 changes from ψ1 to ψ1', and the exact marginal distribution for each period is shown as a black-and-white dashed line. The white area at the top indicates unassigned probability mass. These plots show the clear tradeoff between speed (higher value of γr) and accuracy (less unassigned probability mass). The exact numerical answers at 3000 sec are given in the table above.
5 Simulation Experiments
This section presents simulation results of factor graphs that have been compiled into reaction networks via the procedure in Section 4. All simulations were performed using the SimBiology Toolbox
in Matlab with the 'sundials' solver. The conserved concentrations for all sets of belief species were
set to 1, so plots of concentrations can be directly interpreted as probabilities. Figure 2 shows two
graphical models for which we present detailed simulation results in the next two sections.
5.1 Tree-Structured Factor Graphs
To demonstrate the functionality and features of the compilation procedure described in Section 4,
we compiled the 4 variable factor graph shown in Figure 2a into a reaction network. When x1 , x2 , x3
and x4 have discrete states K1 = K2 = K4 = 2 and K3 = 3, the resulting network has 64 chemical
species and 105 reactions. The largest reaction is of 5th order to compute the marginal distribution
of x2 . We instantiated the factors as shown in Figure 2c and initialized all message and marginal
species to be uniform. To show that the network continuously performs inference and can adapt
to new information, we changed the factor ψ1 to ψ1' halfway through the simulation. In terms of information, the new factor implies that Pr(x1 = 2) is suddenly more likely. In terms of reactions, the change means that $S_0^{(1\to 1)}$ is now more likely to turn into $S_2^{(1\to 1)}$. In a biological reaction network, such a change could be induced by up-regulating, or activating, a catalyst due to a new chemical signal. This new information changes the probability distribution of all variables in the
graph and the network equilibrates to these new values, see Figure 3.
The only two free parameters are γprod and γr. Since only γr has a direct effect on all sets of belief species, we fixed γprod = 50 and varied γr. Small values of γr result in a better approximation, as less of the probability mass in each belief species set is in an unassigned state. However, small values of γr slow the dynamics of the network. Larger values of γr result in faster dynamics, but more of the probability mass remains unassigned (top white area in Figure 3). We should note that, at equilibrium, the relative assignments of probabilities are still correct; see Equation (14) and Equation (18).
The compilation procedure also works for factor graphs with larger factors. When replacing the two binary factors ψ5 and ψ6 in Figure 2a with a new ternary factor ψ5' that is connected to x2, x3, and x4, the compiled reaction network has 58 species and 115 reactions. The largest reaction is of order 4. Larger factors can reduce the number of species, since there are fewer edges and associated messages to represent; however, the domain size Kj of an individual factor grows exponentially in its number of neighbors and thus requires more reactions to implement.
Figure 4: (a) The belief of Pr(x1 = 1) as a function of iteration in loopy belief propagation. All messages are updated simultaneously at every time step. After 100 iterations the oscillations abate and the belief converges to the correct estimate indicated by the dashed line. (b) Trajectory of PAi species concentrations. The simulation time is 3000 sec and the different colors indicate the belief about either of the two states. The dotted line indicates the exact marginal distribution of x1.
5.2 Loopy Belief Propagation
These networks can also be used on factor graphs that are not trees. Figure 2b shows a cyclic graph
which we compiled to reactions and simulated. When Kn = 2 for all variables the resulting reaction
network has 54 species and 84 reactions. We chose factor tables that anti-correlate neighbors and
leaf factors that prefer the same state.
Figure 4 shows the results of performing both loopy belief propagation and simulation results for
the compiled reaction network. Both exhibit decaying oscillations, but settle to the correct marginal
distribution. Since the reaction network is essentially performing damped loopy belief propagation
with an infinitesimal time step, the reaction network implementation should always converge.
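The following sketch (illustrative; the potentials below are not the ones from Figure 2) contrasts standard synchronous loopy BP with a damped update on a frustrated three-variable cycle. A damping factor lam < 1 plays the role of the small effective time step of the reaction network and smooths the decaying oscillations seen in Figure 4(a).

```python
# Minimal sketch, not from the paper: synchronous loopy BP on a 3-cycle with
# anti-correlating pairwise factors and a node potential preferring state 0.
# lam = 1.0 is standard BP; lam < 1 is damped BP, mimicking the
# continuous-time reaction network with an infinitesimal time step.
import numpy as np

psi_node = np.array([0.6, 0.4])                # hypothetical leaf preference
psi_pair = np.array([[0.1, 1.0], [1.0, 0.1]])  # anti-correlating factor
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}       # triangle graph

def belief_x0(lam, iters=200):
    m = {(i, j): np.ones(2) / 2 for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            pre = psi_node.copy()
            for k in nbrs[i]:
                if k != j:
                    pre = pre * m[(k, i)]      # product of other messages
            upd = pre @ psi_pair               # sum over x_i
            upd = upd / upd.sum()
            new[(i, j)] = (1 - lam) * m[(i, j)] + lam * upd
        m = new
    b = psi_node * m[(1, 0)] * m[(2, 0)]
    return b / b.sum()

print("damped (lam=0.3):", belief_x0(0.3))
print("plain  (lam=1.0):", belief_x0(1.0))
```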
6 Conclusion
We present a compilation procedure that takes an arbitrary factor graph over discrete random variables and constructs a reaction network that performs the sum-product message passing algorithm for computing marginal distributions.
These reaction networks exploit the fact that the message structure of the sum-product algorithm
maps neatly onto the model of mass action kinetics. By construction, conserved sets of belief species
in the network perform implicit and continuous normalization of all messages and marginal distributions. The correct behavior of the network implementation relies on higher order reactions to implement multiplicative operations. However, physically, high order reactions are exceedingly unlikely to proceed in a single step. While we can simulate and validate our implementation with respect to the mass action model, a physical implementation will require an additional translation step, e.g., along the lines of [17], with intermediate species and binary reactions.
One aspect that this paper did not address, but we believe is important, is how parameter uncertainty and noise affect the reaction network implementations of inference algorithms. Ideally, they
would be robust to parameter uncertainty and random fluctuations. To address the former one could
directly compute the parameter sensitivity in this deterministic model. To address the latter, we
plan to look at other semantic interpretations of chemical reaction networks, such as the linear noise
approximation or the stochastic chemical kinetics model.
In addition to further analyzing this particular algorithm we would like to implement others, e.g.
max-product, parameter learning, and dynamic state estimation, as reaction networks. We believe
that statistical inference provides the right tools for tackling noise and uncertainty at a microscopic
level, and that reaction networks are the right language for specifying systems at that scale.
Acknowledgements
We are grateful to Wyss Institute for Biologically Inspired Engineering at Harvard, especially Prof.
Radhika Nagpal, for supporting this research. We would also like to thank our colleagues and
reviewers for their helpful feedback.
References
[1] Subhayu Basu, Yoram Gerchman, Cynthia H. Collins, Frances H. Arnold, and Ron Weiss. A synthetic
multicellular system for programmed pattern formation. Nature, 434:1130?1134, 2005.
[2] Christopher M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics.
Springer, 2006.
[3] Luca Cardelli and Gianluigi Zavattaro. On the computational power of biochemistry. In Katsuhisa Horimoto, Georg Regensburger, Markus Rosenkranz, and Hiroshi Yoshida, editors, Algebraic Biology, volume
5147 of Lecture Notes in Computer Science, pages 65?80. Springer Berlin Heidelberg, 2008.
[4] Ho-Lin Chen, David Doty, and David Soloveichik. Deterministic function computation with chemical
reaction networks. In Darko Stefanovic and Andrew Turberfield, editors, DNA Computing and Molecular Programming, volume 7433 of Lecture Notes in Computer Science, pages 25?42. Springer Berlin
Heidelberg, 2012.
[5] Yuan-Jyue Chen, Neil Dalchau, Niranjan Srinivas, Andrew Phillips, Luca Cardelli, David Soloveichik,
and Georg Seelig. Programmable chemical controllers made from DNA. Nature Nanotechnology,
8(10):755?762, October 2013.
[6] Matthew Cook, David Soloveichik, Erik Winfree, and Jehoshua Bruck. Programmability of chemical
reaction networks. In Algorithmic Bioprocesses, Natural Computing Series, pages 543?584. Springer
Berlin Heidelberg, 2009.
[7] Tal Danino, Octavio Mondragon-Palomino, Lev Tsimring, and Jeff Hasty. A synchronized quorum of
genetic clocks. Nature, 463:326?330, 2010.
[8] Michael B. Elowitz and Stanislas Leibler. A synthetic oscillatory network of transcriptional regulators.
Nature, 403:335?338, 2000.
[9] A Hjelmfelt, E D Weinberger, and J Ross. Chemical implementation of neural networks and Turing
machines. Proceedings of the National Academy of Sciences, 88(24):10983?10987, 1991.
[10] Jongmin Kim, John J. Hopfield, and Erik Winfree. Neural network computation by in vitro transcriptional circuits. In Advances in Neural Information Processing Systems 17 (NIPS 2004). MIT Press, 2004.
[11] Marcelo O. Magnasco. Chemical kinetics is Turing universal. Phys. Rev. Lett., 78:1190?1193, Feb 1997.
[12] Chengde Mao, Thomas H. LaBean, John H. Reif, and Nadrian C. Seeman. Logical computation using
algorithmic self-assembly of DNA triple-crossover molecules. Nature, 407:493?496, 2000.
[13] K. Oishi and E. Klavins. Biomolecular implementation of linear I/O systems. Systems Biology, IET,
5(4):252?260, 2011.
[14] Lulu Qian, Erik Winfree, and Jehoshua Bruck. Neural network computation with DNA strand displacement cascades. Nature, 475:368?372, 2011.
[15] Paul W. K Rothemund, Nick Papadakis, and Erik Winfree. Algorithmic self-assembly of DNA Sierpinski
triangles. PLoS Biol, 2(12):e424, 12 2004.
[16] Georg Seelig, David Soloveichik, David Yu Zhang, and Erik Winfree. Enzyme-free nucleic acid logic
circuits. Science, 314(5805):1585?1588, 2006.
[17] David Soloveichik, Georg Seelig, and Erik Winfree. DNA as a universal substrate for chemical kinetics.
Proceedings of the National Academy of Sciences, 107(12):5393?5398, 2010.
[18] Benjamin Vigoda. Analog Logic: Continuous-Time Analog Circuits for Statistical Signal Processing.
PhD thesis, Massachusetts Institute of Technology, 2003.
[19] Jonathan S. Yedidia, W.T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. Information Theory, IEEE Transactions on, 51(7):2282?2312,
2005.
[20] David Yu Zhang and Georg Seelig. DNA-based fixed gain amplifiers and linear classifier circuits. In
Yasubumi Sakakibara and Yongli Mi, editors, DNA Computing and Molecular Programming, volume
6518 of Lecture Notes in Computer Science, pages 176?186. Springer Berlin Heidelberg, 2011.
Information-theoretic lower bounds for distributed
statistical estimation with communication constraints
Yuchen Zhang1
John C. Duchi1
Michael I. Jordan1,2
Martin J. Wainwright1,2
1
Department of Electrical Engineering and Computer Science and 2 Department of Statistics
University of California, Berkeley
Berkeley, CA 94720
{yuczhang,jduchi,jordan,wainwrig}@eecs.berkeley.edu
Abstract
We establish lower bounds on minimax risks for distributed statistical estimation under a communication budget. Such lower bounds reveal the minimum
amount of communication required by any procedure to achieve the centralized
minimax-optimal rates for statistical estimation. We study two classes of protocols: one in which machines send messages independently, and a second allowing
for interactive communication. We establish lower bounds for several problems,
including various types of location models, as well as for parameter estimation in
regression models.
1 Introduction
Rapid growth in the size and scale of datasets has fueled increasing interest in statistical estimation
in distributed settings [see, e.g., 5, 23, 7, 9, 17, 2]. Modern data sets are often too large to be stored
on a single machine, so that it is natural to consider methods that involve multiple machines, each
assigned a smaller subset of the full dataset. An essential design parameter in such methods is the
amount of communication required between machines or chips. Bandwidth limitations on network
and inter-chip communication often impose significant bottlenecks on algorithmic efficiency.
The focus of the current paper is the communication complexity of various classes of statistical estimation problems. More formally, suppose that we are interested in estimating the parameter θ of
some unknown distribution P , based on a dataset of N i.i.d. samples. In the classical setting, one
considers centralized estimators that have access to all N samples, and for a given estimation problem, the optimal performance over all centralized schemes can be characterized by the minimax rate.
By way of contrast, in the distributed setting, one is given m different machines, and each machine
is assigned a subset of samples of size $n = \lceil N/m \rceil$. Each machine may perform arbitrary operations
on its own subset of data, and it then communicates results of these intermediate computations to
the other processors or to a central fusion node. In this paper, we try to answer the following question: what is the minimal number of bits that must be exchanged in order to achieve the optimal
estimation error achievable by centralized schemes?
There is a substantial literature on communication complexity in many settings, including function
computation in theoretical computer science (e.g., [21, 1, 13]), decentralized detection and estimation (e.g., [18, 16, 15]) and information theory [11]. For instance, Luo [15] considers architectures in
which machines may send only a single bit to a centralized processor; for certain problems, he shows
that if each machine receives a single one-dimensional sample, it is possible to achieve the optimal
centralized rate up to constant factors. Among other contributions, Balcan et al. [2] study Probably
Approximately Correct (PAC) learning in the distributed setting; however, their stated lower bounds
do not involve the number of machines. In contrast, our work focuses on scaling issues, both in terms
of the number of machines as well as the dimensionality of the underlying data, and we formalize
the problem in terms of statistical minimax theory.
More precisely, we study the following problem: given a budget B of the total number of bits that
may be communicated from the m distributed datasets, what is the minimax risk of any estimator
based on the communicated messages? While there is a rich literature connecting information-theoretic techniques with the risk of statistical estimators (e.g. [12, 22, 20, 19]), little of it characterizes the effects of limiting communication. In this paper, we present some minimax lower bounds for
distributed statistical estimation. By comparing our lower bounds with results in statistical estimation, we can identify the minimal communication cost that a distributed estimator must pay to have
performance comparable to classical centralized estimators. Moreover, we show how to leverage
recent work [23] to achieve these fundamental limits.
2
Problem setting and notation
We begin with a formal description of the statistical estimation problems considered here. Let
$\mathcal{P}$ denote a family of distributions and let $\theta : \mathcal{P} \to \Theta \subseteq \mathbb{R}^d$ denote a function defined on $\mathcal{P}$. A canonical example throughout the paper is the problem of mean estimation, in which $\theta(P) = \mathbb{E}_P[X]$. Suppose that, for some fixed but unknown member P of P, there are m sets of data stored
on individual machines, where each subset X (i) is an i.i.d. sample of size n from the unknown
distribution P .1 Given this distributed collection of local data sets, our goal is to estimate ?(P )
based on the m samples X (1) , . . . , X (m) , but using limited communication.
We consider a class of distributed protocols Π, in which at each round t = 1, 2, ..., machine i sends a
message Yt,i that is a measurable function of the local data X (i) , and potentially of past messages. It
is convenient to model this message as being sent to a central fusion center. Let $Y^t = \{Y_{t,i}\}_{i\in[m]}$ denote the collection of all messages sent at round t. Given a total of T rounds, the protocol Π collects the sequence $(Y^1, \dots, Y^T)$ and constructs an estimator $\hat\theta := \hat\theta(Y^1, \dots, Y^T)$. The length $L_{t,i}$ of message $Y_{t,i}$ is the minimal number of bits required to encode it, and the total $L = \sum_{t=1}^{T}\sum_{i=1}^{m} L_{t,i}$ of all messages sent corresponds to the total communication cost of the protocol. Note that the communication cost is a random variable, since the length of the messages may depend on the data, and
the protocol may introduce auxiliary randomness.
It is useful to distinguish two different classes, namely independent versus interactive protocols. An
independent protocol ? is based on a single round (T = 1) of communication, in which machine
i sends message Y1,i to the fusion center. Since there are no past messages, the message Y1,i can
depend only on the local sample X (i) . Given a family P, the class of independent protocols with
budget $B \ge 0$ is given by
$$
\mathcal{A}_{\mathrm{ind}}(B, \mathcal{P}) = \Big\{ \text{independent protocols } \Pi \text{ such that } \sup_{P \in \mathcal{P}} \mathbb{E}_P\Big[\sum_{i=1}^{m} L_i\Big] \le B \Big\}. \qquad (1)
$$
(For simplicity, we use Yi to indicate the message sent from processor i and Li to denote its length
in the independent case.) It can be useful in some situations to have more granular control on the
amount of communication, in particular by enforcing budgets on a per-machine basis. In such cases,
we introduce the shorthand B1:m = (B1 , . . . , Bm ) and define
$$
\mathcal{A}_{\mathrm{ind}}(B_{1:m}, \mathcal{P}) = \Big\{ \text{independent protocols } \Pi \text{ such that } \sup_{P \in \mathcal{P}} \mathbb{E}_P[L_i] \le B_i \text{ for } i \in [m] \Big\}. \qquad (2)
$$
In contrast to independent protocols, the class of interactive protocols allows for interaction at different stages of the message passing process. In particular, suppose that machine i sends message
$Y_{t,i}$ to the fusion center at time t, who then posts it on a 'public blackboard,' where all machines can
read Yt,i . We think of this as a global broadcast system, which may be natural in settings in which
processors have limited power or upstream capacity, but the centralized fusion center can send messages without limit. In the interactive setting, the message Yt,i should be viewed as a measurable
function of the local data X (i) , and the past messages Y 1:t?1 . The family of interactive protocols
with budget $B \ge 0$ is given by
$$
\mathcal{A}_{\mathrm{inter}}(B, \mathcal{P}) = \Big\{ \text{interactive protocols } \Pi \text{ such that } \sup_{P \in \mathcal{P}} \mathbb{E}_P[L] \le B \Big\}. \qquad (3)
$$
¹ Although we assume in this paper that every machine has the same amount of data, our technique generalizes easily to prove tight lower bounds for distinct data sizes on different machines.
We conclude this section by defining the minimax framework used throughout this paper. We wish to characterize the best achievable performance of estimators $\hat\theta$ that are functions of only the messages $(Y^1, \dots, Y^T)$. We measure the quality of a protocol and estimator $\hat\theta$ by the mean-squared error
$$
\mathbb{E}_{P,\Pi}\Big[\big\|\hat\theta(Y^1, \dots, Y^T) - \theta(P)\big\|_2^2\Big],
$$
where the expectation is taken with respect to the protocol Π and the m i.i.d. samples $X^{(i)}$ of size n from distribution P. Given a class of distributions $\mathcal{P}$, parameter $\theta : \mathcal{P} \to \Theta$, and communication
budget B, the minimax risk for independent protocols is
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B) := \inf_{\Pi \in \mathcal{A}_{\mathrm{ind}}(B,\mathcal{P})} \; \inf_{\hat\theta} \; \sup_{P \in \mathcal{P}} \; \mathbb{E}_{P,\Pi}\Big[\big\|\hat\theta(Y_1, \dots, Y_m) - \theta(P)\big\|_2^2\Big]. \qquad (4)
$$
Here, the infimum is taken jointly over all independent protocols Π that satisfy the budget constraint B, and over all estimators $\hat\theta$ that are measurable functions of the messages in the protocol. This minimax risk should also be understood to depend on both the number of machines m and the individual sample size n. The minimax risk for interactive protocols, denoted by $M^{\mathrm{inter}}$, is defined analogously, where the infimum is instead taken over the class of interactive protocols. These communication-dependent minimax risks are the central objects in this paper: they provide a sharp characterization
of the optimal rate of statistical estimation as a function of the communication budget B.
3 Main results
With our setup in place, we now turn to the statement of our main results, along with some discussion
of their consequences.
3.1 Lower bound based on metric entropy
We begin with a general but relatively naive lower bound that depends only on the geometric structure of the parameter space, as captured by its metric entropy. In particular, given a subset $\Theta \subset \mathbb{R}^d$, we say $\{\theta_1, \dots, \theta_K\}$ are $\delta$-separated if $\|\theta_i - \theta_j\|_2 \ge \delta$ for $i \ne j$. We then define the packing entropy of $\Theta$ as
$$
\log M_\Theta(\delta) := \log_2 \max\big\{ K \in \mathbb{N} \mid \{\theta_1, \dots, \theta_K\} \subset \Theta \text{ are } \delta\text{-separated} \big\}. \qquad (5)
$$
The function $\delta \mapsto \log M_\Theta(\delta)$ is left-continuous and non-increasing in $\delta$, so we may define the inverse function $\log M_\Theta^{-1}(B) := \sup\{\delta \mid \log M_\Theta(\delta) \ge B\}$.
Proposition 1 For any family of distributions $\mathcal{P}$ and parameter set $\Theta = \theta(\mathcal{P})$, the interactive minimax risk is lower bounded as
$$
M^{\mathrm{inter}}(\theta, \mathcal{P}, B) \ge \frac{1}{4}\Big(\log M_\Theta^{-1}(2B + 2)\Big)^2. \qquad (6)
$$
Of course, the same lower bound also holds for $M^{\mathrm{ind}}(\theta, \mathcal{P}, B)$, since any independent protocol is
a special case of an interactive protocol. Although Proposition 1 is a relatively generic statement,
not exploiting any particular structure of the problem, it is in general unimprovable by more than
constant factors, as the following example illustrates.
Example: Bounded mean estimation. Suppose that our goal is to estimate the mean $\theta = \theta(P)$ of a class of distributions $\mathcal{P}$ supported on the interval [0, 1], so that $\Theta = \theta(\mathcal{P}) = [0, 1]$. Suppose that a single machine (m = 1) receives n i.i.d. observations $X_i$ according to P. Since the packing entropy is lower bounded as $\log M_\Theta(\delta) \ge \log(1/\delta)$, the lower bound (6) implies
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B) \ge M^{\mathrm{inter}}(\theta, \mathcal{P}, B) \ge \frac{e^{-2}}{4}\, e^{-2B}.
$$
Thus, setting $B = \frac{1}{2}\log n$ yields the lower bound $M^{\mathrm{ind}}(\theta, \mathcal{P}([0,1]), B) \ge \frac{e^{-2}}{4n}$. This lower bound is sharp up to the constant pre-factor, since it can be achieved by a simple method. Given its n observations, the single machine can compute the sample mean $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$. Since the sample mean lies in the interval [0, 1], it can be quantized to accuracy 1/n using log(n) bits, and this quantized version $\hat\theta$ can be transmitted. A straightforward calculation shows that $\mathbb{E}[(\hat\theta - \theta)^2] \lesssim \frac{2}{n}$, so Proposition 1 yields an order-optimal bound in this case.
3.2 Multi-machine settings
We now turn to the more interesting multi-machine setting (m > 1). Let us study how the budget B, meaning the number of bits required to achieve the minimax rate, scales with the number of machines m. We begin by considering the uniform location family $\mathcal{U} = \{P_\theta, \theta \in [-1, 1]\}$, where $P_\theta$ is the uniform distribution on the interval $[\theta - 1, \theta + 1]$. For this problem, a direct application of Proposition 1 gives a nearly sharp result.
Corollary 1 Consider the uniform location family $\mathcal{U}$ with n i.i.d. observations per machine:
(a) Whenever the communication budget is upper bounded as $B \le \log(mn)$, there is a universal constant c such that
$$M^{\mathrm{inter}}(\theta, \mathcal{U}, B) \ge \frac{c}{(mn)^2}.$$
(b) Conversely, given a budget of $B = 2 + 2 \ln m \, \log(mn)$ bits, there is a universal constant c' such that
$$M^{\mathrm{inter}}(\theta, \mathcal{U}, B) \le \frac{c'}{(mn)^2}.$$
If each of m machines receives n observations, we have a total sample size of mn, so the minimax rate over all centralized procedures scales as $1/(mn)^2$ (for instance, see [14]). Consequently, Corollary 1(b) shows that the number of bits required to achieve the centralized rate has only logarithmic dependence on the number m of machines. Part (a) shows that this logarithmic dependence on m is unavoidable.
It is natural to wonder whether such logarithmic dependence holds more generally. The following
result shows that it does not: for some problems, the dependence on m must be (nearly) linear. In
particular, we consider estimation in a normal location family model, where each machine receives
an i.i.d. sample of size n from a normal distribution $N(\theta, \sigma^2)$ with unknown mean $\theta$.
Theorem 1 For the univariate normal family $\mathcal{N} = \{N(\theta, \sigma^2) \mid \theta \in [-1, 1]\}$, there is a universal constant c such that
$$
M^{\mathrm{inter}}(\theta, \mathcal{N}, B) \ge c\, \frac{\sigma^2}{mn} \min\left\{ \frac{mn}{\sigma^2},\; \frac{m}{\log m},\; \frac{m}{B \log m} \right\}. \qquad (7)
$$
The centralized minimax rate for estimating a univariate normal mean based on mn observations is $\frac{\sigma^2}{mn}$; consequently, the lower bound (7) shows that at least $B = \Omega\big(\frac{m}{\log m}\big)$ bits are required for a decentralized procedure to match the centralized rate in this case. This type of scaling is dramatically different than the logarithmic scaling for the uniform family, showing that establishing sharp communication-based lower bounds requires careful study of the underlying family of distributions.
3.3 Independent protocols in multi-machine settings
Departing from the interactive setting, in this section we focus on independent protocols, providing
somewhat more general results than those for interactive protocols. We first provide lower bounds for the problem of mean estimation in the d-dimensional normal location family
$$\mathcal{N}_d = \{N(\theta, \sigma^2 I_{d\times d}) \mid \theta \in \Theta = [-1, 1]^d\}. \qquad (8)$$
Theorem 2 For i = 1, ..., m, assume that each machine has communication budget $B_i$, and receives an i.i.d. sample of size n from a distribution $P \in \mathcal{N}_d$. There exists a universal (numerical) constant c such that
$$
M^{\mathrm{ind}}(\theta, \mathcal{N}_d, B_{1:m}) \ge c\, \frac{\sigma^2 d}{mn} \min\left\{ \frac{mn}{\sigma^2},\; \frac{m}{\log m},\; \frac{m}{\sum_{i=1}^m \min\{1, \frac{B_i}{d}\}\, \log m} \right\}. \qquad (9)
$$
Given centralized access to the full mn-sized sample, a reasonable procedure would be to compute the sample mean, leading to an estimate with mean-squared error $\frac{\sigma^2 d}{mn}$, which is minimax optimal.
Consequently, Theorem 2 shows that to achieve an order-optimal mean-squared error, the total number of bits communicated must (nearly) scale with the product of the dimension d and the number of machines m, that is, as $dm/\log m$. If we ignore logarithmic factors, this lower bound is achievable by a simple procedure: each machine computes the sample mean of its local data and quantizes each coordinate to precision $\sigma^2/n$ using $O(d \log(n/\sigma^2))$ bits. These quantized sample averages are communicated to the fusion center using $B = O(dm \log(n/\sigma^2))$ total bits. The fusion center averages them, obtaining an estimate with mean-squared error of optimal order $\sigma^2 d/(mn)$, as required.
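A short simulation of this quantize-and-average protocol (a sketch, not from the paper; all parameter values are arbitrary):

```python
# Minimal sketch: each machine rounds its local sample mean to a grid of
# spacing sigma^2/n and the fusion center averages the quantized means.
import numpy as np

rng = np.random.default_rng(0)
m, n, d, sigma = 100, 50, 10, 1.0
theta = rng.uniform(-1.0, 1.0, size=d)

delta = sigma**2 / n                                       # grid spacing
msgs = []
for _ in range(m):                                         # one machine each
    X = theta + sigma * rng.standard_normal((n, d))        # n local samples
    msgs.append(delta * np.round(X.mean(axis=0) / delta))  # quantized mean

theta_hat = np.mean(msgs, axis=0)                          # fusion average
mse = np.sum((theta_hat - theta) ** 2)
print(f"squared error {mse:.5f} vs sigma^2 d/(mn) = {sigma**2*d/(m*n):.5f}")
```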
We finish this section by presenting a result that is sharp up to numerical constant prefactors. It is a minimax lower bound for mean estimation over the family $\mathcal{P}_d = \{P \text{ supported on } [-1, 1]^d\}$.
Proposition 2 Assume that each of m machines receives a single sample (n = 1) from a distribution in $\mathcal{P}_d$. There exists a universal (numerical) constant c such that
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}_d, B_{1:m}) \ge c\, \frac{d}{m} \min\left\{ m,\; \frac{m}{\sum_{i=1}^m \min\{1, \frac{B_i}{d}\}} \right\}, \qquad (10)
$$
where $B_i$ is the budget for machine i.
The standard minimax rate for d-dimensional mean estimation scales as d/m. The lower bound (10) shows that in order to achieve this scaling, we must have $\sum_{i=1}^m \min\{1, \frac{B_i}{d}\} \gtrsim m$, showing that each machine must send $B_i \gtrsim d$ bits.
Moreover, this lower bound is achievable by a simple scheme. Suppose that machine i receives a d-dimensional vector $X_i \in [-1, 1]^d$. Based on $X_i$, it generates a Bernoulli random vector $Z_i = (Z_{i1}, \dots, Z_{id})$ with $Z_{ij} \in \{0, 1\}$ taking the value 1 with probability $(1 + X_{ij})/2$, independently across coordinates. Machine i uses d bits to send the vector $Z_i \in \{0, 1\}^d$ to the fusion center. The fusion center then computes the average $\hat\theta = \frac{1}{m}\sum_{i=1}^m (2Z_i - 1)$. This average is unbiased, and its expected squared error is bounded by d/m.
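The scheme is simple enough to simulate directly; the following sketch (illustrative parameter values) checks that the squared error stays at the d/m scale:

```python
# Minimal sketch of the one-bit-per-coordinate scheme above: machine i holds
# X_i in [-1, 1]^d, sends Z_i with Z_ij ~ Bernoulli((1 + X_ij)/2), and the
# fusion center averages 2 Z_i - 1, which is unbiased for the mean theta.
import numpy as np

rng = np.random.default_rng(0)
m, d = 1000, 20
theta = rng.uniform(-0.5, 0.5, size=d)                 # mean of P
X = theta + rng.uniform(-0.5, 0.5, size=(m, d))        # one sample per machine

Z = (rng.random((m, d)) < (1 + X) / 2).astype(float)   # d bits per machine
theta_hat = (2 * Z - 1).mean(axis=0)                   # E[2Z - 1 | X] = X

mse = np.sum((theta_hat - theta) ** 2)
print(f"squared error {mse:.4f}, bound d/m = {d/m:.4f}")
```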
4 Consequences for regression
In this section, we turn to identifying the minimax rates for a pair of important estimation problems:
linear regression and probit regression.
4.1 Linear regression
We consider a distributed instantiation of linear regression with fixed design matrices. Concretely,
suppose that each of m machines has stored a fixed design matrix $A^{(i)} \in \mathbb{R}^{n\times d}$ and then observes a response vector $b^{(i)} \in \mathbb{R}^n$ from the standard linear regression model
$$b^{(i)} = A^{(i)}\theta + \varepsilon^{(i)}, \qquad (11)$$
where $\varepsilon^{(i)} \sim N(0, \sigma^2 I_{n\times n})$ is a noise vector. Our goal is to estimate the unknown regression vector $\theta \in \Theta = [-1, 1]^d$, shared across all machines, in a distributed manner. To state our result, we
assume uniform upper and lower bounds on the eigenvalues of the rescaled design matrices, namely
$$
0 < \lambda_{\min} \le \min_{i\in\{1,\dots,m\}} \frac{\lambda_{\min}(A^{(i)})}{\sqrt{n}} \quad \text{and} \quad \max_{i\in\{1,\dots,m\}} \frac{\lambda_{\max}(A^{(i)})}{\sqrt{n}} \le \lambda_{\max}. \qquad (12)
$$
Corollary 2 Consider an instance of the linear regression model (11) under condition (12).
(a) Then there is a universal positive constant c such that
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B_{1:m}) \ge c\, \frac{\sigma^2 d}{\lambda_{\max}^2 mn} \min\left\{ \frac{mn\,\lambda_{\max}^2}{\sigma^2},\; \frac{m}{\log m},\; \frac{m}{\sum_{i=1}^m \min\{1, \frac{B_i}{d}\}\, \log m} \right\}.
$$
(b) Conversely, given budgets $B_i \ge d \log(mn)$ for i = 1, ..., m, there is a universal constant c' such that
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B_{1:m}) \le \frac{c'\, \sigma^2 d}{\lambda_{\min}^2 mn}.
$$
It is a classical fact (e.g. [14]) that the minimax rate for d-dimensional linear regression scales as $d\sigma^2/(nm)$. Part (a) of Corollary 2 shows this optimal rate is attainable only if the budget $B_i$ at each machine is of the order $d/\log(m)$, meaning that the total budget $B = \sum_{i=1}^m B_i$ must grow as $\frac{dm}{\log m}$. Part (b) of the corollary shows that the minimax rate is achievable with budgets that match the lower bound up to logarithmic factors.
Proof: The proof of part (b) follows from techniques of Zhang et al. [23], who show that solving
each regression problem separately and then performing a form of approximate averaging, in which
each machine uses Bi = d log(mn) bits, achieves the minimax rate up to constant prefactors.
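For intuition, here is a stripped-down version of such a protocol (a sketch, not the bias-corrected estimator of [23]): each machine solves its local least-squares problem and the fusion center averages the local solutions, which already attains the $\sigma^2 d/(mn)$ rate for this Gaussian model since local OLS is unbiased.

```python
# Minimal sketch: distributed OLS with simple averaging at the fusion center.
# Sizes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n, d, sigma = 50, 200, 5, 1.0
theta = rng.uniform(-1, 1, size=d)

local_solutions = []
for _ in range(m):
    A = rng.standard_normal((n, d))                     # local fixed design
    b = A @ theta + sigma * rng.standard_normal(n)      # model (11)
    local_solutions.append(np.linalg.lstsq(A, b, rcond=None)[0])

theta_hat = np.mean(local_solutions, axis=0)            # fusion-center average
mse = np.sum((theta_hat - theta) ** 2)
print(f"squared error {mse:.5f} vs sigma^2 d/(mn) = {sigma**2*d/(m*n):.5f}")
```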
To prove part (a), we show that solving an arbitrary Gaussian mean estimation problem can be reduced to solving a specially constructed linear regression problem. This reduction allows us to apply the lower bound from Theorem 2. Given $\theta \in \Theta$, consider the Gaussian mean model
$$
X^{(i)} = \theta + w^{(i)}, \quad \text{where } w^{(i)} \sim N\Big(0, \frac{\sigma^2}{\lambda_{\max}^2 n}\, I_{d\times d}\Big).
$$
Each machine i has its own design matrix $A^{(i)}$, and we use it to construct a response vector $b^{(i)} \in \mathbb{R}^n$. Since $\sigma_{\max}(A^{(i)}/\sqrt{n}) \le \lambda_{\max}$, the matrix $\Sigma^{(i)} := \sigma^2 I_{n\times n} - \frac{\sigma^2}{\lambda_{\max}^2 n} A^{(i)} (A^{(i)})^\top$ is positive semidefinite. Consequently, we may form a response vector via
$$
b^{(i)} = A^{(i)} X^{(i)} + z^{(i)}, \quad z^{(i)} \sim N(0, \Sigma^{(i)}) \text{ drawn independently of } w^{(i)}. \qquad (13)
$$
The independence of $w^{(i)}$ and $z^{(i)}$ guarantees that $b^{(i)} \sim N(A^{(i)}\theta, \sigma^2 I_{n\times n})$, so that the pair $(b^{(i)}, A^{(i)})$ is faithful to the regression model (11).
Now consider any protocol $\Pi \in \mathcal{A}_{\mathrm{ind}}(B, \mathcal{P})$ that can solve any regression problem to within accuracy $\delta$, so that $\mathbb{E}[\|\hat\theta - \theta\|_2^2] \le \delta^2$. By the previously described reduction, the protocol $\Pi$ can also solve the mean estimation problem to accuracy $\delta$, in particular via the pair $(A^{(i)}, b^{(i)})$ constructed via expression (13). Combined with this reduction, the corollary thus follows from Theorem 2.
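The reduction is easy to verify numerically; the sketch below (illustrative sizes, not from the paper) constructs $b^{(i)}$ per (13) and checks that the residual $b^{(i)} - A^{(i)}\theta$ has covariance $\sigma^2 I$.

```python
# Minimal numerical check of the reduction (13): given X ~ N(theta,
# sigma^2/(lmax^2 n) I), the constructed b = A X + z is N(A theta, sigma^2 I).
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma, trials = 20, 4, 1.0, 20000
A = rng.standard_normal((n, d))
lmax = np.linalg.svd(A / np.sqrt(n), compute_uv=False).max()
theta = rng.uniform(-1, 1, size=d)

Sigma = sigma**2 * np.eye(n) - (sigma**2 / (lmax**2 * n)) * (A @ A.T)  # PSD
X = theta + (sigma / (lmax * np.sqrt(n))) * rng.standard_normal((trials, d))
z = rng.multivariate_normal(np.zeros(n), Sigma, size=trials)
b = X @ A.T + z                                 # many draws of (13)

emp_cov = np.cov((b - theta @ A.T).T)           # should be close to sigma^2 I
print("max |emp_cov - sigma^2 I| =",
      np.abs(emp_cov - sigma**2 * np.eye(n)).max())
```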
4.2 Probit regression
We now turn to the problem of binary classification, in particular considering the probit regression model. As in the previous section, each of m machines has a fixed design matrix $A^{(i)} \in \mathbb{R}^{n\times d}$, where $A^{(i,k)}$ denotes the kth row of $A^{(i)}$. Machine i receives n binary responses $Z^{(i)} = (Z^{(i,1)}, \dots, Z^{(i,n)})$, drawn from the conditional distribution
$$
P(Z^{(i,k)} = 1 \mid A^{(i,k)}, \theta) = \Phi(A^{(i,k)}\theta) \quad \text{for some fixed } \theta \in \Theta = [-1, 1]^d, \qquad (14)
$$
where $\Phi(\cdot)$ denotes the standard normal CDF. The log-likelihood of the probit model (14) is concave [4, Exercise 3.54]. Under condition (12) on the design matrices, we have:
Corollary 3 Consider the probit model (14) under condition (12). Then
(a) There is a universal constant c such that
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B_{1:m}) \ge c\, \frac{d}{\lambda_{\max}^2 mn} \min\left\{ mn\,\lambda_{\max}^2,\; \frac{m}{\log m},\; \frac{m}{\sum_{i=1}^m \min\{1, \frac{B_i}{d}\}\, \log m} \right\}.
$$
(b) Conversely, given budgets $B_i \ge d \log(mn)$ for i = 1, ..., m, there is a universal constant c' such that
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B_{1:m}) \le \frac{c'\, d}{\lambda_{\min}^2 mn}.
$$
Proof: As in the previous case with linear regression, Zhang et al.'s study of distributed convex
optimization [23] gives part (b): each machine solves the local probit regression separately, after
which each machine sends Bi = d log(mn) bits to average its local solution.
To prove part (a), we show that linear regression problems can be solved via estimation in a specially
constructed probit model. Consider an arbitrary ? ? ?; assume we have a regression problem of the
6
form (11) with noise variance ? 2 = 1. We construct the binary responses for our probit regression
(Z (i,1) , . . . , Z (i,n) ) by
1 if b(i,k) ? 0,
(i,k)
Z
=
(15)
0 otherwise.
By construction, we have P(Z (i,k) = 1 | A(i) , ?) = ?(A(i,k) ?) as desired for our model (14). By
inspection, any protocol ? ? Aind (B, P) solving the probit regression problem provides an estimator with the same error (risk) as the original linear regression problem via the construction (15).
Corollary 2 provides the desired lower bound.
5
Proof sketches for main results
We now give an outline of the proof of each of our main results (Theorems 1 and 2), providing a
more detailed proof sketch for Proposition 2, since it displays techniques common to our arguments.
5.1
Broad outline
Most of our lower bounds follow the same basic strategy of reducing an estimation problem to
a testing problem. Following this reduction, we then develop inequalities relating the probability
of error in the test to the number of bits contained in the messages Yi sent from each machine.
Establishing these links is the most technically challenging aspect.
Our reduction from estimation to testing is somewhat more general than the classical reductions
(e.g., [22, 20]), since we do not map the original estimation problem to a strict test, but rather a
test that allows some errors. Let V denote an index set of finite cardinality, where ? ? V indexes a
family of probability distributions {P (? | ?)}??V . For each member of this family, associate with a
parameter ?? := ?(P (? | ?)) ? ?, where ? denotes the parameter space. In our proofs applicable to
d-dimensional problems, we set V = {?1, 1}d , and we index vectors ?? by ? ? V. Now, we sample
V uniformly at random from V. Conditional on V = ?, we then sample X from a distribution
PX (? | V = ?) satisfying ?? := ?(PX (? | ?)) = ??, where ? > 0 is a fixed quantity that we control.
We define dham (?, ? ? ) to be the Hamming distance between ?, ? ? ? V. This construction gives
p
k?? ? ?? ? k2 = 2? dham (?, ? ? ).
Fixing $t \in \mathbb{R}$, the following lemma reduces the problem of estimating $\theta$ to finding a point $\nu \in \mathcal{V}$ within distance t of the random variable V. The result extends a result of Duchi and Wainwright [8]; for completeness we provide a proof in Appendix H.
Lemma 1 Let V be uniformly sampled from $\mathcal{V}$. For any estimator $\hat\theta$ and any $t \in \mathbb{R}$, we have
$$
\sup_{P\in\mathcal{P}} \mathbb{E}\big[\|\hat\theta - \theta(P)\|_2^2\big] \ge \delta^2 (\lfloor t \rfloor + 1)\, \inf_{\hat\nu} P\big(d_{\mathrm{ham}}(\hat\nu, V) > t\big),
$$
where the infimum ranges over all testing functions.
Lemma 1 shows that a minimax lower bound can be derived by showing that, for some t > 0 to be chosen, it is difficult to identify V within a radius of t. The following extension of Fano's inequality [8] can be used to control this type of error probability:
Lemma 2 Let $V \to X \to \hat V$ be a Markov chain, where V is uniform on $\mathcal{V}$. For any $t \in \mathbb{R}$, we have
$$
P\big(d_{\mathrm{ham}}(\hat V, V) > t\big) \ge 1 - \frac{I(V; X) + \log 2}{\log\big(|\mathcal{V}|/N_t\big)},
$$
where $N_t := \max_{\nu\in\mathcal{V}} |\{\nu' \in \mathcal{V} : d_{\mathrm{ham}}(\nu, \nu') \le t\}|$ is the size of the largest t-neighborhood in $\mathcal{V}$.
Lemma 2 allows flexibility in the application of the minimax bounds from Lemma 1. If there is a
large set V for which it is easy to control I(V ; X), whereas neighborhoods in V are relatively small
(i.e., Nt is small), then we can obtain sharp lower bounds.
In a distributed protocol, we have a Markov chain $V \to X \to Y$, where Y denotes the messages the different machines send. Based on the messages Y, we consider an arbitrary estimator $\hat\theta(Y)$. For $0 \le t \le \lfloor d/3 \rfloor$, we have $N_t = \sum_{\tau=0}^{t} \binom{d}{\tau} \le 2\binom{d}{t}$. Since $\binom{d}{t} \le (de/t)^t$, for $t \le d/6$ we have
$$
\log\frac{|\mathcal{V}|}{N_t} \ge d\log 2 - \frac{d}{6}\log(6e) - \log 2 = d\,\log\frac{2}{2^{1/d}\,(6e)^{1/6}} \ge \frac{d}{6}
$$
for $d \ge 12$ (the case d < 12 can be checked directly). Thus, combining Lemma 1 and Lemma 2
for d ? 12 (the case d < 12 can be checked directly). Thus, combining Lemma 1 and Lemma 2
b we find that for t = ?d/6?,
(using the Markov chain V ? X ? Y ? ?),
i
h
I(Y ; V ) + log 2
2
2
b
.
(16)
sup E k?(Y ) ? ?(P )k2 ? ? (?d/6? + 1) 1 ?
d/6
P ?P
With inequality (16) in hand, it then remains to upper bound the mutual information I(Y ; V ), which
is the main technical content of each of our results.
5.2 Proof sketch of Proposition 2
Following the general outline of the previous section, let V be uniform on $\mathcal{V} = \{-1, 1\}^d$. Letting $0 < \delta \le 1$ be a positive number, for $i \in [m]$ we independently sample $X^{(i)} \in \mathbb{R}^d$ according to
$$
P\big(X^{(i)}_j = \nu_j \mid V = \nu\big) = \frac{1+\delta}{2} \quad \text{and} \quad P\big(X^{(i)}_j = -\nu_j \mid V = \nu\big) = \frac{1-\delta}{2}. \qquad (17)
$$
Under this distribution, we can give a sharp characterization of the mutual information $I(V; Y_i)$. In particular, we show in Appendix B that under the sampling distribution (17), there exists a numerical constant c such that
$$I(V; Y_i) \le c\,\delta^2\, I(X^{(i)}; Y_i). \qquad (18)$$
Since the random variable X takes discrete values, we have
$$I(X^{(i)}; Y_i) \le \min\{H(X^{(i)}), H(Y_i)\} \le \min\{d, H(Y_i)\}.$$
Since the expected length of message $Y_i$ is bounded by $B_i$, Shannon's source coding theorem [6] implies that $H(Y_i) \le B_i$. In particular, inequality (18) establishes a link between the initial distribution (17) and the number of bits used to transmit information, that is,
$$I(V; Y_i) \le c\,\delta^2 \min\{d, B_i\}. \qquad (19)$$
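For d = 1 and a one-bit message $Y = X$, the mutual information in (18) can be computed in closed form; the sketch below (illustrative, not from the paper) shows the $\delta^2$ scaling that drives the bound:

```python
# Minimal sketch: exact I(V; Y) for the chain V -> X -> Y with Y = X, d = 1,
# under distribution (17). For small delta, I(V; Y) ~ delta^2 / (2 ln 2) bits.
import numpy as np

def I_bits(delta):
    p = (1 + delta) / 2                          # P(X = V)
    Hb = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # H(X | V)
    return 1.0 - Hb                              # H(X) = 1 since V is uniform

for delta in [0.5, 0.2, 0.1, 0.05]:
    print(f"delta={delta:5}: I(V;Y)={I_bits(delta):.5f}, "
          f"delta^2/(2 ln 2)={delta**2/(2*np.log(2)):.5f}")
```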
We can now apply the quantitative data processing inequality (19) in the bound (16). By the independence of the communication scheme, $I(V; Y_{1:m}) \le \sum_{i=1}^m I(V; Y_i)$, and thus inequality (16) simplifies to
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B_{1:m}) \ge \delta^2 (\lfloor d/6 \rfloor + 1)\left(1 - \frac{c\,\delta^2 \sum_{i=1}^m \min\{d, B_i\} + \log 2}{d/6}\right).
$$
Assuming $d \ge 9$, so that $1 - 6\log 2/d > 1/2$, we see that choosing $\delta^2 = \min\big\{1, \frac{d}{24 c \sum_{i=1}^m \min\{B_i, d\}}\big\}$ implies
$$
M^{\mathrm{ind}}(\theta, \mathcal{P}, B_{1:m}) \ge \frac{\delta^2 (\lfloor d/6 \rfloor + 1)}{4} = \frac{\lfloor d/6 \rfloor + 1}{4} \min\left\{1, \frac{d}{24 c \sum_{i=1}^m \min\{B_i, d\}}\right\}.
$$
Rearranging slightly gives the statement of the proposition.
Acknowledgments
We thank the anonymous reviewers for their helpful feedback and comments. JCD was supported
by a Facebook Graduate Fellowship. Our work was supported in part by the U.S. Army Research
Laboratory, U.S. Army Research Office under grant number W911NF-11-1-0391, and Office of
Naval Research MURI grant N00014-11-1-0688.
References
[1] H. Abelson. Lower bounds on information transfer in distributed computations. Journal of the
ACM, 27(2):384?392, 1980.
[2] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In Proceedings of the Twenty Fifth Annual Conference on Computational
Learning Theory, 2012. URL http://arxiv.org/abs/1204.3514.
[3] K. Ball. An elementary introduction to modern convex geometry. In S. Levy, editor, Flavors
of Geometry, pages 1?58. MSRI Publications, 1997.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in
Machine Learning, 3(1), 2011.
[6] T. M. Cover and J. A. Thomas. Elements of Information Theory, Second Edition. Wiley, 2006.
[7] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction
using mini-batches. Journal of Machine Learning Research, 13:165?202, 2012.
[8] J. C. Duchi and M. J. Wainwright. Distance-based and continuum Fano inequalities with applications to statistical estimation. arXiv [cs.IT], to appear, 2013.
[9] J. C. Duchi, A. Agarwal, and M. J. Wainwright. Dual averaging for distributed optimization:
convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):
592?606, 2012.
[10] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates.
arXiv:1302.3203 [math.ST], 2013. URL http://arXiv.org/abs/1302.3203.
[11] S. Han and S. Amari. Statistical inference under multiterminal data compression. IEEE Transactions on Information Theory, 44(6):2300?2324, 1998.
[12] I. A. Ibragimov and R. Z. Has?minskii. Statistical Estimation: Asymptotic Theory. SpringerVerlag, 1981.
[13] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
[14] E. L. Lehmann and G. Casella. Theory of Point Estimation, Second Edition. Springer, 1998.
[15] Z.-Q. Luo. Universal decentralized estimation in a bandwidth constrained sensor network.
IEEE Transactions on Information Theory, 51(6):2210?2219, 2005.
[16] Z.-Q. Luo and J. N. Tsitsiklis. Data fusion with minimal communication. IEEE Transactions
on Information Theory, 40(5):1551?1563, 1994.
[17] R. McDonald, K. Hall, and G. Mann. Distributed training strategies for the structured perceptron. In North American Chapter of the Association for Computational Linguistics (NAACL),
2010.
[18] J. N. Tsitsiklis. Decentralized detection. In Advances in Signal Processing, Vol. 2, pages
297?344. JAI Press, 1993.
[19] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[20] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence.
Annals of Statistics, 27(5):1564?1599, 1999.
[21] A. C.-C. Yao. Some complexity questions related to distributive computing (preliminary report). In Proceedings of the Eleventh Annual ACM Symposium on the Theory of Computing,
pages 209?213. ACM, 1979.
[22] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423?435.
Springer-Verlag, 1997.
[23] Y. Zhang, J. C. Duchi, and M. J. Wainwright. Communication-efficient algorithms for statistical optimization. In Advances in Neural Information Processing Systems 26, 2012.
PAC-Bayes-Empirical-Bernstein Inequality
Yevgeny Seldin
Queensland University of Technology
UC Berkeley
[email protected]
Ilya Tolstikhin
Computing Centre
Russian Academy of Sciences
[email protected]
Abstract
We present a PAC-Bayes-Empirical-Bernstein inequality. The inequality is based
on a combination of the PAC-Bayesian bounding technique with an Empirical
Bernstein bound. We show that when the empirical variance is significantly
smaller than the empirical loss the PAC-Bayes-Empirical-Bernstein inequality is
significantly tighter than the PAC-Bayes-kl inequality of Seeger (2002) and otherwise it is comparable. Our theoretical analysis is confirmed empirically on a
synthetic example and several UCI datasets. The PAC-Bayes-Empirical-Bernstein
inequality is an interesting example of an application of the PAC-Bayesian bounding technique to self-bounding functions.
1 Introduction
PAC-Bayesian analysis is a general and powerful tool for data-dependent analysis in machine learning. By now it has been applied in such diverse areas as supervised learning [1-4], unsupervised learning [4, 5], and reinforcement learning [6]. PAC-Bayesian analysis combines the best aspects of PAC learning and Bayesian learning: (1) it provides strict generalization guarantees (like VC-theory), (2) it is flexible and allows the incorporation of prior knowledge (like Bayesian learning), and (3) it provides data-dependent generalization guarantees (akin to Rademacher complexities).
PAC-Bayesian analysis provides concentration inequalities for the divergence between expected and empirical loss of randomized prediction rules. For a hypothesis space $H$, a randomized prediction rule associated with a distribution $\rho$ over $H$ operates by picking a hypothesis at random according to $\rho$ from $H$ each time it has to make a prediction. If $\rho$ is a delta-distribution we recover classical prediction rules that pick a single hypothesis $h \in H$. Otherwise, the prediction strategy resembles Bayesian prediction from the posterior distribution, with the distinction that $\rho$ does not have to be the Bayes posterior. Importantly, many PAC-Bayesian inequalities hold for all posterior distributions $\rho$ simultaneously (with high probability over a random draw of a training set). Therefore, PAC-Bayesian bounds can be used in two ways. Ideally, we prefer to derive new algorithms that find the posterior distribution $\rho$ that minimizes the PAC-Bayesian bound on the expected loss. However, we can also use PAC-Bayesian bounds in order to estimate the expected loss of posterior distributions $\rho$ that were found by other algorithms, such as empirical risk minimization, regularized empirical risk minimization, Bayesian posteriors, and so forth. In such applications PAC-Bayesian bounds can be used to provide generalization guarantees for other methods and can be applied as a substitute for cross-validation in parameter tuning (since the bounds hold for all posterior distributions $\rho$ simultaneously, we can apply the bounds to test multiple posterior distributions $\rho$ without suffering from over-fitting, in contrast with extensive applications of cross-validation).
There are two forms of PAC-Bayesian inequalities that are currently known to be the tightest, depending on the situation. One is the PAC-Bayes-kl inequality of Seeger [7] and the other is the PAC-Bayes-Bernstein inequality of Seldin et al. [8]. However, the PAC-Bayes-Bernstein inequality is expressed in terms of the true expected variance, which is rarely accessible in practice. Therefore, in order to apply the PAC-Bayes-Bernstein inequality we need an upper bound on the expected variance (or, more precisely, on the average of the expected variances of losses of each hypothesis $h \in H$ weighted according to the randomized prediction rule $\rho$). If the loss is bounded in the $[0, 1]$ interval, the expected variance can be upper bounded by the expected loss, and this bound can be used to recover the PAC-Bayes-kl inequality from the PAC-Bayes-Bernstein inequality (with slightly suboptimal constants and suboptimal behavior for small sample sizes). In fact, for the binary loss this result cannot be significantly improved (see Section 3). However, when the loss is not binary it may be possible to obtain a tighter bound on the variance, which will lead to a tighter bound on the loss than the PAC-Bayes-kl inequality. For example, in Seldin et al. [6] a deterministic upper bound on the variance of importance-weighted sampling combined with the PAC-Bayes-Bernstein inequality yielded an order of magnitude improvement relative to application of the PAC-Bayes-kl inequality to the same problem. We note that the bound on the variance used by Seldin et al. [6] depends on specific properties of importance-weighted sampling and does not apply to other problems.
In this work we derive the PAC-Bayes-Empirical-Bernstein bound, in which the expected average variance of the loss weighted by $\rho$ is replaced by the weighted average of the empirical variance of the loss. Bounding the expected variance by the empirical variance is generally tighter than bounding it by the empirical loss. Therefore, the PAC-Bayes-Empirical-Bernstein bound is generally tighter than the PAC-Bayes-kl bound, although the exact comparison also depends on the divergence between the posterior and the prior and on the sample size. In Section 5 we provide an empirical comparison of the two bounds on several synthetic and UCI datasets.

The PAC-Bayes-Empirical-Bernstein bound is derived in two steps. In the first step we combine the PAC-Bayesian bounding technique with the Empirical Bernstein inequality [9] and derive a PAC-Bayesian bound on the variance. The PAC-Bayesian bound on the variance bounds the divergence between averages [weighted by $\rho$] of expected and empirical variances of the losses of hypotheses in $H$, and holds with high probability for all averaging distributions $\rho$ simultaneously. In the second step the PAC-Bayesian bound on the variance is substituted into the PAC-Bayes-Bernstein inequality, yielding the PAC-Bayes-Empirical-Bernstein bound.
The remainder of the paper is organized as follows. We start with some formal definitions and
review the major PAC-Bayesian bounds in Section 2, provide our main results in Section 3 and their
proof sketches in Section 4, and finish with experiments in Section 5 and conclusions in Section 6.
Detailed proofs are provided in the supplementary material.
2 Problem Setting and Background
We start by providing the problem setting and then give some background on PAC-Bayesian analysis.
2.1 Notations and Definitions
We consider a supervised learning setting with an input space $\mathcal{X}$, an output space $\mathcal{Y}$, an i.i.d. training sample $S = \{(X_i, Y_i)\}_{i=1}^n$ drawn according to an unknown distribution $D$ on the product space $\mathcal{X} \times \mathcal{Y}$, a loss function $\ell : \mathcal{Y}^2 \to [0, 1]$, and a hypothesis class $H$. The elements of $H$ are functions $h : \mathcal{X} \to \mathcal{Y}$ from the input space to the output space. We use $\ell_h(X, Y) = \ell(Y, h(X))$ to denote the loss of a hypothesis $h$ on a pair $(X, Y)$.
For a fixed hypothesis $h \in H$ denote its expected loss by $L(h) = \mathbb{E}_{(X,Y)\sim D}[\ell_h(X, Y)]$, the empirical loss by $\hat L_n(h) = \frac{1}{n}\sum_{i=1}^n \ell_h(X_i, Y_i)$, and the variance of the loss by
$$\mathbb{V}(h) = \mathrm{Var}_{(X,Y)\sim D}[\ell_h(X, Y)] = \mathbb{E}_{(X,Y)\sim D}\Big[\big(\ell_h(X, Y) - \mathbb{E}_{(X,Y)\sim D}[\ell_h(X, Y)]\big)^2\Big].$$
We define the Gibbs regression rule $G_\rho$ associated with a distribution $\rho$ over $H$ in the following way: for each point $X$, the Gibbs regression rule draws a hypothesis $h$ according to $\rho$ and applies it to $X$. The expected loss of the Gibbs regression rule is denoted by $L(G_\rho) = \mathbb{E}_{h\sim\rho}[L(h)]$ and the empirical loss is denoted by $\hat L_n(G_\rho) = \mathbb{E}_{h\sim\rho}[\hat L_n(h)]$. We use $\mathrm{KL}(\rho\|\pi) = \mathbb{E}_{h\sim\rho}\big[\ln\frac{\rho(h)}{\pi(h)}\big]$ to denote the Kullback-Leibler divergence between two probability distributions [10]. For two Bernoulli distributions with biases $p$ and $q$ we use $\mathrm{kl}(q\|p)$ as a shorthand for $\mathrm{KL}([q, 1-q]\,\|\,[p, 1-p])$. In the sequel we use $\mathbb{E}_\rho[\cdot]$ as a shorthand for $\mathbb{E}_{h\sim\rho}[\cdot]$.
2.2 PAC-Bayes-kl bound
Before presenting our results we review several existing PAC-Bayesian bounds. The result in Theorem 1 was presented by Maurer [11, Theorem 5] and is one of the tightest known concentration bounds on the expected loss of the Gibbs regression rule. Theorem 1 generalizes (and slightly tightens) the PAC-Bayes-kl inequality of Seeger [7, Theorem 1] from binary to arbitrary loss functions bounded in the $[0, 1]$ interval.
Theorem 1. For any fixed probability distribution $\pi$ over $H$, for any $n \ge 8$ and $\delta > 0$, with probability greater than $1-\delta$ over a random draw of a sample $S$, for all distributions $\rho$ over $H$ simultaneously:
$$\mathrm{kl}\big(\hat L_n(G_\rho)\,\big\|\,L(G_\rho)\big) \le \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{2\sqrt{n}}{\delta}}{n}. \qquad (1)$$
Since by Pinsker's inequality $|p - q| \le \sqrt{\mathrm{kl}(q\|p)/2}$, Theorem 1 directly implies (up to minor factors) the more explicit PAC-Bayesian bound of McAllester [12]:
$$L(G_\rho) \le \hat L_n(G_\rho) + \sqrt{\frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}, \qquad (2)$$
which holds with probability greater than $1-\delta$ for all $\rho$ simultaneously. We note that kl is easy to invert numerically, and for small values of $\hat L_n(G_\rho)$ (less than 1/4) the implicit bound in (1) is significantly tighter than the explicit bound in (2). This can be seen from another relaxation suggested by McAllester [2], which follows from (1) by the inequality $p \le q + \sqrt{2q\,\mathrm{kl}(q\|p)} + 2\,\mathrm{kl}(q\|p)$ for $p > q$:
$$L(G_\rho) \le \hat L_n(G_\rho) + \sqrt{\frac{2\hat L_n(G_\rho)\big(\mathrm{KL}(\rho\|\pi) + \ln\frac{2\sqrt{n}}{\delta}\big)}{n}} + \frac{2\big(\mathrm{KL}(\rho\|\pi) + \ln\frac{2\sqrt{n}}{\delta}\big)}{n}. \qquad (3)$$
From inequality (3) we clearly see that inequality (1) achieves a "fast convergence rate"; in other words, when $L(G_\rho)$ is zero (or small compared to $1/\sqrt{n}$) the bound converges at the rate of $1/n$ rather than $1/\sqrt{n}$ as a function of $n$.
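The numerical inversion mentioned above is simple to implement. Below is a minimal Python sketch (our own code, not from the paper; the function names are illustrative) that evaluates the PAC-Bayes-kl bound (1) by bisecting on the upper inverse of the binary kl divergence:

```python
import math

def binary_kl(q, p):
    """kl(q||p) between Bernoulli biases q and p, with the 0*log(0) = 0 convention."""
    terms = 0.0
    if q > 0:
        terms += q * math.log(q / p)
    if q < 1:
        terms += (1 - q) * math.log((1 - q) / (1 - p))
    return terms

def kl_inverse_upper(q, c, tol=1e-12):
    """Largest p in [q, 1) with kl(q||p) <= c; kl(q||.) is increasing on [q, 1)."""
    lo, hi = q, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_kl(q, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

def pac_bayes_kl_bound(emp_loss, kl_rho_pi, n, delta):
    """Upper bound on L(G_rho) implied by inequality (1)."""
    c = (kl_rho_pi + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_loss, c)

# Example: empirical loss 0.1, KL(rho||pi) = 18, n = 4000, delta = 0.05.
print(pac_bayes_kl_bound(0.1, 18.0, 4000, 0.05))
```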
2.3 PAC-Bayes-Bernstein Bound
Seldin et al. [8] introduced a general technique for combining PAC-Bayesian analysis with concentration-of-measure inequalities and derived the PAC-Bayes-Bernstein bound cited below. (The PAC-Bayes-Bernstein bound of Seldin et al. holds for martingale sequences, but for simplicity in this paper we restrict ourselves to i.i.d. variables.)
Theorem 2. For any fixed distribution $\pi$ over $H$, for any $\delta_1 > 0$, and for any fixed $c_1 > 1$, with probability greater than $1-\delta_1$ (over a draw of $S$) we have
$$L(G_\rho) \le \hat L_n(G_\rho) + (1+c_1)\sqrt{\frac{(e-2)\,\mathbb{E}_\rho[\mathbb{V}(h)]\,\big(\mathrm{KL}(\rho\|\pi) + \ln\frac{\nu_1}{\delta_1}\big)}{n}} \qquad (4)$$
simultaneously for all distributions $\rho$ over $H$ that satisfy
$$\sqrt{\frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{\nu_1}{\delta_1}}{(e-2)\,\mathbb{E}_\rho[\mathbb{V}(h)]}} \le \sqrt{n},$$
where
$$\nu_1 = \left\lceil \frac{1}{\ln c_1}\,\ln\sqrt{\frac{(e-2)\,n}{4\ln(1/\delta_1)}} \right\rceil + 1,$$
and for all other $\rho$ we have:
$$L(G_\rho) \le \hat L_n(G_\rho) + 2\,\frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{\nu_1}{\delta_1}}{n}.$$
Furthermore, the result holds if $\mathbb{E}_\rho[\mathbb{V}(h)]$ is replaced by an upper bound $\bar{\mathbb{V}}(\rho)$, as long as $\mathbb{E}_\rho[\mathbb{V}(h)] \le \bar{\mathbb{V}}(\rho) \le \frac{1}{4}$ for all $\rho$.
A few comments on Theorem 2 are in order here. First, we note that Seldin et al. worked with cumulative losses and variances, whereas we work with normalized losses and variances, which means that their losses and variances differ by a multiplicative factor of $n$ from our definitions. Second, we note that the statement on the possibility of replacing $\mathbb{E}_\rho[\mathbb{V}(h)]$ by an upper bound is not part of [8, Theorem 8], but it is mentioned and analyzed explicitly in the text. The requirement that $\bar{\mathbb{V}}(\rho) \le \frac{1}{4}$ is not mentioned explicitly, but it follows directly from the necessity to preserve the relevant range of the trade-off parameter $\lambda$ in the proof of the theorem. Since $\frac{1}{4}$ is a trivial upper bound on the variance of a random variable bounded in the $[0, 1]$ interval, the requirement is not a limitation. Finally, we note that since we are working with "one-sided" variables (namely, the loss is bounded in the $[0, 1]$ interval rather than the "two-sided" $[-1, 1]$ interval, which was considered in [8]) the variance is bounded by $\frac{1}{4}$ (rather than 1), which leads to a slight improvement in the value of $\nu_1$.

Since in reality we rarely have access to the expected variance $\mathbb{E}_\rho[\mathbb{V}(h)]$, the tightness of Theorem 2 entirely depends on the tightness of the upper bound $\bar{\mathbb{V}}(\rho)$. If we use the trivial upper bound $\mathbb{E}_\rho[\mathbb{V}(h)] \le \frac{1}{4}$, the result is roughly equivalent to (2), which is inferior to Theorem 1. The design of a tighter upper bound on $\mathbb{E}_\rho[\mathbb{V}(h)]$ that holds for all $\rho$ simultaneously is the subject of the following section.
3 Main Results
The key result of our paper is a PAC-Bayesian bound on the average expected variance $\mathbb{E}_\rho[\mathbb{V}(h)]$ given in terms of the average empirical variance $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)] = \mathbb{E}_{h\sim\rho}[\hat{\mathbb{V}}_n(h)]$, where
$$\hat{\mathbb{V}}_n(h) = \frac{1}{n-1}\sum_{i=1}^n \big(\ell_h(X_i, Y_i) - \hat L_n(h)\big)^2 \qquad (5)$$
is an unbiased estimate of the variance $\mathbb{V}(h)$. The bound is given in Theorem 3 and it holds with high probability for all distributions $\rho$ simultaneously. Substitution of this bound into Theorem 2 yields the PAC-Bayes-Empirical-Bernstein inequality given in Theorem 4. Thus, the PAC-Bayes-Empirical-Bernstein inequality is based on two subsequent applications of the PAC-Bayesian bounding technique.
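For concreteness, the empirical quantities above are straightforward to compute; a minimal NumPy sketch (our own helper, not from the paper):

```python
import numpy as np

def empirical_loss_and_variance(losses):
    """Given losses[i] = ell_h(X_i, Y_i) for one hypothesis h, return
    (hat L_n(h), hat V_n(h)) with hat V_n(h) as in equation (5)."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    emp_loss = losses.mean()
    emp_var = np.sum((losses - emp_loss) ** 2) / (n - 1)  # unbiased (ddof = 1)
    return emp_loss, emp_var
```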
3.1 PAC-Bayesian bound on the variance
Theorem 3 is based on an application of the PAC-Bayesian bounding technique to the difference $\mathbb{E}_\rho[\mathbb{V}(h)] - \mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)]$. We note that $\hat{\mathbb{V}}_n(h)$ is a second-order U-statistic [13], and Theorem 3 provides an interesting example of combining PAC-Bayesian analysis with concentration inequalities for self-bounding functions.
Theorem 3. For any fixed distribution $\pi$ over $H$, any $c_2 > 1$ and $\delta_2 > 0$, with probability greater than $1-\delta_2$ over a draw of $S$, for all distributions $\rho$ over $H$ simultaneously:
$$\mathbb{E}_\rho[\mathbb{V}(h)] \le \mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)] + (1+c_2)\sqrt{\frac{\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)]\,\big(\mathrm{KL}(\rho\|\pi) + \ln\frac{\nu_2}{\delta_2}\big)}{2(n-1)}} + \frac{2c_2\big(\mathrm{KL}(\rho\|\pi) + \ln\frac{\nu_2}{\delta_2}\big)}{n-1}, \qquad (6)$$
where
$$\nu_2 = \left\lceil \frac{1}{\ln c_2}\,\ln\sqrt{\frac{n-1}{2\ln(1/\delta_2)}} + 1 + \frac{1}{2} \right\rceil.$$
Note that (6) closely resembles the explicit bound on $L(G_\rho)$ in (3). If the empirical variance $\hat{\mathbb{V}}_n(h)$ is close to zero, the impact of the second term of the bound (which scales with $1/\sqrt{n}$) is relatively small and we obtain a "fast convergence rate" of $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)]$ to $\mathbb{E}_\rho[\mathbb{V}(h)]$. Finally, we note that the impact of $c_2$ on $\ln\nu_2$ is relatively small, and so $c_2$ can be taken very close to 1.
3.2 PAC-Bayes-Empirical-Bernstein bound
Theorem 3 controls the average variance $\mathbb{E}_\rho[\mathbb{V}(h)]$ for all posterior distributions $\rho$ simultaneously. By taking $\delta_1 = \delta_2 = \frac{\delta}{2}$ we have the claims of Theorems 2 and 3 holding simultaneously with probability greater than $1-\delta$. Substitution of the bound on $\mathbb{E}_\rho[\mathbb{V}(h)]$ from Theorem 3 into the PAC-Bayes-Bernstein inequality in Theorem 2 yields the main result of our paper, the PAC-Bayes-Empirical-Bernstein inequality, which controls the loss of the Gibbs regression rule $\mathbb{E}_\rho[L(h)]$ for all posterior distributions $\rho$ simultaneously.
Theorem 4. Let $\tilde{\mathbb{V}}_n(\rho)$ denote the right-hand side of (6) (with $\delta_2 = \frac{\delta}{2}$) and let $\bar{\mathbb{V}}_n(\rho) = \min\big\{\tilde{\mathbb{V}}_n(\rho), \frac{1}{4}\big\}$. For any fixed distribution $\pi$ over $H$, for any $\delta > 0$, and for any $c_1, c_2 > 1$, with probability greater than $1-\delta$ (over a draw of $S$) we have:
$$L(G_\rho) \le \hat L_n(G_\rho) + (1+c_1)\sqrt{\frac{(e-2)\,\bar{\mathbb{V}}_n(\rho)\,\big(\mathrm{KL}(\rho\|\pi) + \ln\frac{2\nu_1}{\delta}\big)}{n}} \qquad (7)$$
simultaneously for all distributions $\rho$ over $H$ that satisfy
$$\sqrt{\frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{2\nu_1}{\delta}}{(e-2)\,\bar{\mathbb{V}}_n(\rho)}} \le \sqrt{n},$$
where $\nu_1$ was defined in Theorem 2 (with $\delta_1 = \frac{\delta}{2}$), and for all other $\rho$ we have:
$$L(G_\rho) \le \hat L_n(G_\rho) + 2\,\frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{2\nu_1}{\delta}}{n}.$$
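To make the quantities concrete, here is a minimal Python sketch (our own code, not from the paper) that evaluates the right-hand side of (7), assuming the averaged empirical loss, the capped variance bound, and $\mathrm{KL}(\rho\|\pi)$ have already been computed:

```python
import math

E = math.e

def nu1(n, delta1, c1):
    """Grid size nu_1 from Theorem 2 (called here with delta_1 = delta / 2)."""
    return math.ceil((1.0 / math.log(c1)) *
                     math.log(math.sqrt((E - 2) * n / (4 * math.log(1 / delta1))))) + 1

def pb_eb_bound(emp_loss, v_bar, kl_rho_pi, n, delta, c1=1.15):
    """PAC-Bayes-Empirical-Bernstein bound (7); v_bar is the capped
    variance bound bar V_n(rho) = min(tilde V_n(rho), 1/4)."""
    log_term = kl_rho_pi + math.log(2 * nu1(n, delta / 2, c1) / delta)
    if math.sqrt(log_term / ((E - 2) * v_bar)) <= math.sqrt(n):
        return emp_loss + (1 + c1) * math.sqrt((E - 2) * v_bar * log_term / n)
    return emp_loss + 2 * log_term / n
```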
Note that all the quantities in Theorem 4 are computable based on the sample.

As we can see immediately by comparing the $O(1/\sqrt{n})$ term in the PAC-Bayes-Empirical-Bernstein inequality (PB-EB for brevity) with the corresponding term in the relaxed version of the PAC-Bayes-kl inequality (PB-kl for brevity) in equation (3), the PB-EB inequality can potentially be tighter when $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)] \le \frac{1}{2(e-2)}\hat L_n(G_\rho) \approx 0.7\,\hat L_n(G_\rho)$. We also note that when the loss is bounded in the $[0, 1]$ interval we have $\hat{\mathbb{V}}_n(h) \le \frac{n}{n-1}\hat L_n(h)$ (since $\ell_h(X, Y)^2 \le \ell_h(X, Y)$). Therefore, the PB-EB bound is never much worse than the PB-kl bound, and if the empirical variance is small compared to the empirical loss it can be much tighter. We note that for the binary loss ($\ell(y, y') \in \{0, 1\}$) we have $\mathbb{V}(h) = L(h)(1 - L(h))$, and in this case the empirical variance cannot be significantly smaller than the empirical loss, so PB-EB does not provide an advantage over PB-kl. We also note that the unrelaxed version of the PB-kl inequality in equation (1) has better behavior for very small sample sizes, and in such cases PB-kl can be tighter than PB-EB even when the empirical variance is small. To summarize the discussion: when $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)] \le 0.7\,\hat L_n(G_\rho)$ the PB-EB inequality can be significantly tighter than the PB-kl bound, and otherwise it is comparable (except for very small sample sizes). In Section 5 we provide a more detailed numerical comparison of the two inequalities.
4 Proofs
In this section we present a sketch of the proof of Theorem 3 and the proof of Theorem 4. Full details of the proof of Theorem 3 are provided in the supplementary material. The proof of Theorem 3 is based on the following lemma, which is at the base of all PAC-Bayesian theorems. (Since we could not find a reference where the lemma is stated explicitly, its proof is provided in the supplementary material.)
Lemma 1. For any function $f_n : H \times (\mathcal{X}\times\mathcal{Y})^n \to \mathbb{R}$ and for any distribution $\pi$ over $H$, such that $\pi$ is independent of $S$, with probability greater than $1-\delta$ over a random draw of $S$, for all distributions $\rho$ over $H$ simultaneously:
$$\mathbb{E}_\rho[f_n(h, S)] \le \mathrm{KL}(\rho\|\pi) + \ln\frac{1}{\delta} + \ln \mathbb{E}_\pi\Big[\mathbb{E}_{S'\sim D^n}\big[e^{f_n(h, S')}\big]\Big]. \qquad (8)$$
The smart part is to choose $f_n(h, S)$ so that we get the quantities of interest on the left-hand side of (8) and at the same time are able to bound the last term on the right-hand side of (8). Bounding of the moment generating function (the last term in (8)) is usually done by invoking some known concentration-of-measure results. In the proof of Theorem 3 we use the fact that $n\hat{\mathbb{V}}_n(h)$ satisfies the self-bounding property [14]. Specifically, for any $\lambda > 0$:
$$\mathbb{E}_{S\sim D^n}\Big[e^{\lambda\left(n\mathbb{V}(h) - n\hat{\mathbb{V}}_n(h)\right) - \frac{\lambda^2}{2}\frac{n^2}{n-1}\mathbb{V}(h)}\Big] \le 1 \qquad (9)$$
[Figure 1 omitted: two contour panels, (a) n = 1000 and (b) n = 4000, plotting average sample variance against average empirical loss on log-scaled axes.]
Figure 1: The ratio of the gap between PB-EB and $\hat L_n(G_\rho)$ to the gap between PB-kl and $\hat L_n(G_\rho)$ for different values of $n$, $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)]$, and $\hat L_n(G_\rho)$. PB-EB is tighter below the dashed line with label 1. The axes of the graphs are in log scale.
(see, for example, [9, Theorem 10]). We take $f_n(h, S) = \lambda\left(n\mathbb{V}(h) - n\hat{\mathbb{V}}_n(h)\right) - \frac{\lambda^2}{2}\frac{n^2}{n-1}\mathbb{V}(h)$ and substitute $f_n$ and the bound on its moment generating function in (9) into (8). To complete the proof it is left to optimize the bound with respect to $\lambda$. Since it is impossible to minimize the bound simultaneously for all $\rho$ with a single value of $\lambda$, we follow the technique suggested by Seldin et al. and take a grid of $\lambda$-s in the form of a geometric progression and apply a union bound over this grid. Then, for each $\rho$ we pick the value of $\lambda$ from the grid which is closest to the value of $\lambda$ that minimizes the bound for the corresponding $\rho$. (The approximation of the optimal $\lambda$ by the closest $\lambda$ from the grid is behind the factor $c_2$ in the bound, and the $\ln\nu_2$ factor is the result of the union bound over the grid of $\lambda$-s.) Technical details of the derivation are provided in the supplementary material.
Proof of Theorem 4. By our choice of $\delta_1 = \delta_2 = \frac{\delta}{2}$ the upper bounds of Theorems 2 and 3 hold simultaneously with probability greater than $1-\delta$. Therefore, with probability greater than $1-\frac{\delta}{2}$ we have $\mathbb{E}_\rho[\mathbb{V}(h)] \le \bar{\mathbb{V}}_n(\rho) \le \frac{1}{4}$, and the result follows by Theorem 2.
5 Experiments
Before presenting the experiments we give a general comparison of the behavior of the PB-EB and PB-kl bounds as a function of $\hat L_n(G_\rho)$, $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)]$, and $n$. In Figures 1.a and 1.b we examine the ratio of the complexity parts of the two bounds,
$$\frac{\text{PB-EB} - \hat L_n(G_\rho)}{\text{PB-kl} - \hat L_n(G_\rho)},$$
where PB-EB denotes the value of the PB-EB bound in equation (7) and PB-kl denotes the value of the PB-kl bound in equation (1). The ratio is presented in the $\hat L_n(G_\rho)$ by $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)]$ plane for two values of $n$. In the illustrative comparison we took $\mathrm{KL}(\rho\|\pi) = 18$, and in all the experiments presented in this section we take $c_1 = c_2 = 1.15$ and $\delta = 0.05$. As we wrote in the discussion of Theorem 4, PB-EB is never much worse than PB-kl, and when $\mathbb{E}_\rho[\hat{\mathbb{V}}_n(h)] \ll \hat L_n(G_\rho)$ it can be significantly tighter. In the illustrative comparison in Figure 1, in the worst case the ratio is slightly above 2.5 and in the best case it is slightly above 0.3. We note that as the sample size grows the worst-case ratio decreases (asymptotically down to 1.2), while the improvement of the best-case ratio is unlimited.
As we already said, the advantage of the PB-EB inequality over the PB-kl inequality is most prominent in regression (for classification with the zero-one loss it is roughly comparable to PB-kl). Below we provide regression experiments with the L1 loss on synthetic data and three datasets from the UCI repository [15]. We use the PB-EB and PB-kl bounds to bound the loss of a regularized empirical risk minimization algorithm. In all our experiments the inputs $X_i$ lie in a $d$-dimensional unit ball centered at the origin ($\|X_i\|_2 \le 1$) and the outputs $Y$ take values in $[-0.5, 0.5]$. The hypothesis class $\mathcal{H}_W$ is defined as
$$\mathcal{H}_W = \big\{ h_w(X) = \langle w, X\rangle : w \in \mathbb{R}^d,\ \|w\|_2 \le 0.5 \big\}.$$
This construction ensures that the L1 regression loss $\ell(y, y') = |y - y'|$ is bounded in the $[0, 1]$ interval. We use a uniform prior distribution over $\mathcal{H}_W$ defined by $\pi(w) = V(1/2, d)^{-1}$, where $V(r, d)$ is the volume of a $d$-dimensional ball with radius $r$. The posterior distribution $\rho_{\hat w}$ is taken to be a uniform distribution on a $d$-dimensional ball of radius $\epsilon$ centered at the weight vector $\hat w$, where $\hat w$ is the solution of the following minimization problem:
$$\hat w = \arg\min_w \frac{1}{n}\sum_{i=1}^n |Y_i - \langle w, X_i\rangle| + \lambda^* \|w\|_2^2. \qquad (10)$$
Note that (10) is a quadratic program and can be solved by various numerical solvers (we used Matlab quadprog). The role of the regularization parameter $\lambda^* \|w\|_2^2$ is to ensure that the posterior distribution is supported by $\mathcal{H}_W$. We use binary search in order to find the minimal (non-negative) $\lambda^*$ such that the posterior $\rho_{\hat w}$ is supported by $\mathcal{H}_W$ (meaning that the ball of radius $\epsilon$ around $\hat w$ is within the ball of radius 0.5 around the origin). In all the experiments below we used $\epsilon = 0.05$.
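As an illustration, problem (10) can be written down in a few lines with a generic convex solver. The sketch below is our own Python code using cvxpy rather than Matlab's quadprog (the paper's experiments used Matlab; the search bracket hi=1e3 is an assumption), together with the binary search over $\lambda^*$ described above:

```python
import numpy as np
import cvxpy as cp

def solve_l1_ridge(X, Y, lam):
    """Minimize (1/n) sum_i |Y_i - <w, X_i>| + lam * ||w||_2^2 over w, as in (10)."""
    n, d = X.shape
    w = cp.Variable(d)
    objective = cp.sum(cp.abs(Y - X @ w)) / n + lam * cp.sum_squares(w)
    cp.Problem(cp.Minimize(objective)).solve()
    return w.value

def smallest_feasible_lambda(X, Y, eps=0.05, hi=1e3, iters=50):
    """Binary search for the minimal lam such that the eps-ball around the
    solution stays inside the ball of radius 0.5 (posterior supported by H_W)."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        w_hat = solve_l1_ridge(X, Y, mid)
        if np.linalg.norm(w_hat) + eps <= 0.5:
            hi = mid
        else:
            lo = mid
    return hi
```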
5.1 Synthetic data
Our synthetic datasets are produced as follows. We take inputs $X_1, \ldots, X_n$ uniformly distributed in a $d$-dimensional unit ball centered at the origin. Then we define
$$Y_i = \sigma_0(50\,\langle w_0, X_i\rangle) + \xi_i$$
with weight vector $w_0 \in \mathbb{R}^d$, centred sigmoid function $\sigma_0(z) = \frac{1}{1+e^{-z}} - 0.5$, which takes values in $[-0.5, 0.5]$, and noise $\xi_i$ independent of $X_i$ and uniformly distributed in $[-a_i, a_i]$ with
$$a_i = \begin{cases} \min\big\{0.1,\ 0.5 - \sigma_0(50\,\langle w_0, X_i\rangle)\big\}, & \text{for } \sigma_0(50\,\langle w_0, X_i\rangle) \ge 0; \\ \min\big\{0.1,\ 0.5 + \sigma_0(50\,\langle w_0, X_i\rangle)\big\}, & \text{for } \sigma_0(50\,\langle w_0, X_i\rangle) < 0. \end{cases}$$
This design ensures that $Y_i \in [-0.5, 0.5]$. The sigmoid function creates a mismatch between the data-generating distribution and the linear hypothesis class. Together with the relatively small level of the noise ($\xi_i \le 0.1$), this results in a small empirical variance of the loss $\hat{\mathbb{V}}_n(h)$ and medium to high empirical loss $\hat L_n(h)$. Let us denote the $j$-th coordinate of a vector $u \in \mathbb{R}^d$ by $u_j$ and the number of nonzero coordinates of $u$ by $\|u\|_0$. We choose the weight vector $w_0$ to have only a few nonzero coordinates and consider two settings. In the first setting $d \in \{2, 5\}$, $\|w_0\|_0 = 2$, $w_0^1 = 0.12$, and $w_0^2 = -0.04$; in the second setting $d \in \{3, 6\}$, $\|w_0\|_0 = 3$, $w_0^1 = -0.08$, $w_0^2 = 0.05$, and $w_0^3 = 0.2$.
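A minimal NumPy sketch of this generator (our own code, written to match the description above) might look as follows:

```python
import numpy as np

def sample_unit_ball(n, d, rng):
    """Uniform samples from the d-dimensional unit ball: normalize Gaussians,
    then scale radii by U^(1/d)."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.uniform(size=(n, 1)) ** (1.0 / d)
    return g * r

def generate_dataset(n, w0, rng):
    d = w0.size
    X = sample_unit_ball(n, d, rng)
    s = 1.0 / (1.0 + np.exp(-50.0 * X @ w0)) - 0.5      # centred sigmoid
    a = np.where(s >= 0, np.minimum(0.1, 0.5 - s), np.minimum(0.1, 0.5 + s))
    xi = rng.uniform(-a, a)                              # noise in [-a_i, a_i]
    return X, s + xi

rng = np.random.default_rng(0)
w0 = np.array([0.12, -0.04])                             # first setting, d = 2
X, Y = generate_dataset(1000, w0, rng)
```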
For each sample size ranging from 300 to 4000 we averaged the bounds over 10 randomly generated
datasets. The results are presented in Figure 2. We see that except for very small sample sizes
(n < 1000) the PB-EB bound outperforms the PB-kl bound. Inferior performance for very small
sample sizes is a result of domination of the O(1/n) term in the PB-EB bound (7). As soon as n
gets large enough this term significantly decreases and PB-EB dominates PB-kl.
5.2 UCI datasets
We compare our PAC-Bayes-Empirical-Bernstein inequality (7) with the PAC-Bayes-kl inequality (1) on three UCI regression datasets: Wine Quality, Parkinsons Telemonitoring, and Concrete Compressive Strength. For each dataset we centred and normalised both outputs and inputs so that $Y_i \in [-0.5, 0.5]$ and $\|X_i\| \le 1$. The results for a 5-fold train-test split of the data, together with basic descriptions of the datasets, are presented in Table 1.
[Figure 2 omitted: four panels of expected loss versus sample size showing the PB-EB and PB-kl bounds together with train and test errors for (a) d = 2, $\|w_0\|_0 = 2$; (b) d = 5, $\|w_0\|_0 = 2$; (c) d = 3, $\|w_0\|_0 = 3$; (d) d = 6, $\|w_0\|_0 = 3$.]
Figure 2: The values of the PAC-Bayes-kl and PAC-Bayes-Empirical-Bernstein bounds together with the test and train errors on synthetic data. The values are averaged over the 10 random draws of training and test sets.

Table 1: Results for the UCI datasets

Dataset     | n    | d  | Train          | Test           | PB-kl bound    | PB-EB bound
winequality | 6497 | 11 | 0.106 ± 0.0005 | 0.106 ± 0.0022 | 0.175 ± 0.0006 | 0.162 ± 0.0006
parkinsons  | 5875 | 16 | 0.188 ± 0.0014 | 0.188 ± 0.0055 | 0.266 ± 0.0013 | 0.250 ± 0.0012
concrete    | 1030 | 8  | 0.110 ± 0.0008 | 0.111 ± 0.0038 | 0.242 ± 0.0010 | 0.264 ± 0.0011

6 Conclusions and future work
We derived a new PAC-Bayesian bound that controls the convergence of averages of empirical variances of losses of hypotheses in $H$ to averages of expected variances of losses of hypotheses in $H$, simultaneously for all averaging distributions $\rho$. This bound is an interesting example of a combination
of the PAC-Bayesian bounding technique with concentration inequalities for self-bounding functions. We applied the bound to derive the PAC-Bayes-Empirical-Bernstein inequality, which is a powerful Bernstein-type inequality outperforming the state-of-the-art PAC-Bayes-kl inequality of Seeger [7] in situations where the empirical variance is smaller than the empirical loss, and otherwise comparable to PAC-Bayes-kl. We also demonstrated an empirical advantage of the new PAC-Bayes-Empirical-Bernstein inequality over the PAC-Bayes-kl inequality on several synthetic and real-life regression datasets.
Our work opens a number of interesting directions for future research. One of the most important of
them is to derive algorithms that will directly minimize the PAC-Bayes-Empirical-Bernstein bound.
Another interesting direction would be to decrease the last term in the bound in Theorem 3, as it is
done in the PAC-Bayes-kl inequality. This can probably be achieved by deriving a PAC-Bayes-kl
inequality for the variance.
Acknowledgments
The authors are thankful to Anton Osokin for useful discussions and to the anonymous reviewers
for their comments. This research was supported by an Australian Research Council Australian Laureate Fellowship (FL110100281) and Russian Foundation for Basic Research grants 13-07-00677 and 14-07-00847.
References
[1] John Langford and John Shawe-Taylor. PAC-Bayes & margins. In Advances in Neural Information Processing Systems (NIPS), 2002.
[2] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1), 2003.
[3] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine
Learning Research, 2005.
[4] Yevgeny Seldin and Naftali Tishby. PAC-Bayesian analysis of co-clustering and beyond. Journal of Machine Learning Research, 11, 2010.
[5] Matthew Higgs and John Shawe-Taylor. A PAC-Bayes bound for tailored density estimation.
In Proceedings of the International Conference on Algorithmic Learning Theory (ALT), 2010.
[6] Yevgeny Seldin, Peter Auer, François Laviolette, John Shawe-Taylor, and Ronald Ortner. PAC-Bayesian analysis of contextual bandits. In Advances in Neural Information Processing Systems (NIPS), 2011.
[7] Matthias Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 2002.
[8] Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter
Auer. PAC-Bayesian inequalities for martingales. IEEE Transactions on Information Theory,
58, 2012.
[9] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample variance
penalization. In Proceedings of the International Conference on Computational Learning Theory (COLT), 2009.
[10] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons,
1991.
[11] Andreas Maurer. A note on the PAC-Bayesian theorem. www.arxiv.org, 2004.
[12] David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37, 1999.
[13] A.W. Van Der Vaart. Asymptotic statistics. Cambridge University Press, 1998.
[14] Stéphane Boucheron, Gábor Lugosi, and Olivier Bousquet. Concentration inequalities. In O. Bousquet, U. von Luxburg, and G. Rätsch, editors, Advanced Lectures in Machine Learning.
Springer, 2004.
[15] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007. http://www.ics.uci.edu/~mlearn/MLRepository.html.
4,314 | 4,904 | Regularized M-estimators with nonconvexity:
Statistical and algorithmic theory for local optima
Martin J. Wainwright
Departments of Statistics and EECS
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Po-Ling Loh
Department of Statistics
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
We establish theoretical results concerning local optima of regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss satisfies restricted strong convexity and the penalty satisfies suitable regularity conditions, any local optimum of the composite objective lies within statistical precision of the true parameter vector. Our theory covers a broad class of nonconvex objective functions, including corrected versions of the Lasso for errors-in-variables linear models and regression in generalized linear models using nonconvex regularizers such as SCAD and MCP. On the optimization side, we show that a simple adaptation of composite gradient descent may be used to compute a global optimum up to the statistical precision $\epsilon_{\mathrm{stat}}$ in $\log(1/\epsilon_{\mathrm{stat}})$ iterations, the fastest possible rate for any first-order method. We provide simulations to illustrate the sharpness of our theoretical predictions.
1 Introduction
Optimization of nonconvex functions is known to be computationally intractable in general [11, 12]. Unlike convex functions, nonconvex functions may possess local optima that are not global optima, and standard iterative methods such as gradient descent and coordinate descent are only guaranteed to converge to local optima. Although statistical results regarding nonconvex M-estimation often only provide guarantees about the accuracy of global optima, it is observed empirically that the local optima obtained by various estimation algorithms seem to be well-behaved.

In this paper, we study the question of whether it is possible to certify "good" behavior, in both a statistical and computational sense, for various nonconvex M-estimators. On the statistical level, we provide an abstract result, applicable to a broad class of (potentially nonconvex) M-estimators, which bounds the distance between any local optimum and the unique minimum of the population risk. Although local optima of nonconvex objectives may not coincide with global optima, our theory shows that any local optimum is essentially as good as a global optimum from a statistical perspective. The class of M-estimators covered by our theory includes the modified Lasso as a special case, but our results are much stronger than those implied by previous work [6].
In addition to nonconvex loss functions, our theory also applies to nonconvex regularizers, shedding
new light on a long line of recent work involving the nonconvex SCAD and MCP regularizers [3, 2,
13, 14]. Various methods have been proposed for optimizing convex loss functions with nonconvex
penalties [3, 4, 15], but these methods are only guaranteed to generate local optima of the composite
objective, which have not been proven to be well-behaved. In contrast, our work provides a set
of regularity conditions under which all local optima are guaranteed to lie within a small ball of
the population-level minimum, ensuring that standard methods such as projected and composite
gradient descent [10] are sufficient for obtaining estimators that lie within statistical error of the
truth. In fact, we establish that under suitable conditions, a modified form of composite gradient descent only requires $\log(1/\epsilon_{\mathrm{stat}})$ iterations to obtain a solution that is accurate up to the statistical precision $\epsilon_{\mathrm{stat}}$.
Notation. For functions $f(n)$ and $g(n)$, we write $f(n) \lesssim g(n)$ to mean that $f(n) \le c\,g(n)$ for some universal constant $c \in (0, \infty)$, and similarly, $f(n) \gtrsim g(n)$ when $f(n) \ge c'\,g(n)$ for some universal constant $c' \in (0, \infty)$. We write $f(n) \asymp g(n)$ when $f(n) \lesssim g(n)$ and $f(n) \gtrsim g(n)$ hold simultaneously. For a function $h : \mathbb{R}^p \to \mathbb{R}$, we write $\nabla h$ to denote a gradient or subgradient, if it exists. Finally, for $q, r > 0$, let $\mathbb{B}_q(r)$ denote the $\ell_q$-ball of radius $r$ centered around 0.
2 Problem formulation
In this section, we develop some general theory for regularized M-estimators. We first establish notation, then discuss assumptions for the nonconvex regularizers and losses studied in our paper.
2.1 Background
Given a collection of $n$ samples $Z_1^n = \{Z_1, \ldots, Z_n\}$, drawn from a marginal distribution $\mathbb{P}$ over a space $\mathcal{Z}$, consider a loss function $\mathcal{L}_n : \mathbb{R}^p \times \mathcal{Z}^n \to \mathbb{R}$. The value $\mathcal{L}_n(\beta; Z_1^n)$ serves as a measure of the "fit" between a parameter vector $\beta \in \mathbb{R}^p$ and the observed data. This empirical loss function should be viewed as a surrogate to the population risk function $\mathcal{L} : \mathbb{R}^p \to \mathbb{R}$, given by
$$\mathcal{L}(\beta) := \mathbb{E}_Z\big[\mathcal{L}_n(\beta; Z_1^n)\big].$$
Our goal is to estimate the parameter vector $\beta^* := \arg\min_{\beta\in\mathbb{R}^p} \mathcal{L}(\beta)$ that minimizes the population risk, assumed to be unique.
To this end, we consider a regularized M-estimator of the form
$$\hat\beta \in \arg\min_{g(\beta)\le R}\big\{\mathcal{L}_n(\beta; Z_1^n) + \rho_\lambda(\beta)\big\}, \qquad (1)$$
where $\rho_\lambda : \mathbb{R}^p \to \mathbb{R}$ is a regularizer, depending on a tuning parameter $\lambda > 0$, which serves to enforce a certain type of structure on the solution. In all cases, we consider regularizers that are separable across coordinates, and with a slight abuse of notation, we write $\rho_\lambda(\beta) = \sum_{j=1}^p \rho_\lambda(\beta_j)$. Our theory allows for possible nonconvexity in both the loss function $\mathcal{L}_n$ and the regularizer $\rho_\lambda$. Due to this potential nonconvexity, our M-estimator also includes a side constraint $g : \mathbb{R}^p \to \mathbb{R}_+$, which we require to be a convex function satisfying the lower bound $g(\beta) \ge \|\beta\|_1$ for all $\beta \in \mathbb{R}^p$. Consequently, any feasible point for the optimization problem (1) satisfies the constraint $\|\beta\|_1 \le R$, and as long as the empirical loss and regularizer are continuous, the Weierstrass extreme value theorem guarantees that a global minimum $\hat\beta$ exists.
2.2 Nonconvex regularizers
We now state and discuss conditions on the regularizer, defined in terms of $\rho_\lambda : \mathbb{R} \to \mathbb{R}$.
Assumption 1.
(i) The function $\rho_\lambda$ satisfies $\rho_\lambda(0) = 0$ and is symmetric around zero (i.e., $\rho_\lambda(t) = \rho_\lambda(-t)$ for all $t \in \mathbb{R}$).
(ii) On the nonnegative real line, the function $\rho_\lambda$ is nondecreasing.
(iii) For $t > 0$, the function $t \mapsto \frac{\rho_\lambda(t)}{t}$ is nonincreasing in $t$.
(iv) The function $\rho_\lambda$ is differentiable for all $t \neq 0$ and subdifferentiable at $t = 0$, with nonzero subgradients at $t = 0$ bounded by $\lambda L$.
(v) There exists $\mu > 0$ such that $\rho_{\lambda,\mu}(t) := \rho_\lambda(t) + \mu t^2$ is convex.
Many regularizers that are commonly used in practice satisfy Assumption 1, including the $\ell_1$-norm, $\rho_\lambda(\beta) = \lambda\|\beta\|_1$, and the following commonly used nonconvex regularizers:

SCAD penalty: This penalty, due to Fan and Li [3], takes the form
$$\rho_\lambda(t) := \begin{cases} \lambda|t|, & \text{for } |t| \le \lambda,\\ -(t^2 - 2a\lambda|t| + \lambda^2)/(2(a-1)), & \text{for } \lambda < |t| \le a\lambda,\\ (a+1)\lambda^2/2, & \text{for } |t| > a\lambda, \end{cases} \qquad (2)$$
where $a > 2$ is a fixed parameter. Assumption 1 holds with $L = 1$ and $\mu = \frac{1}{a-1}$.
MCP regularizer: This penalty, due to Zhang [13], takes the form
$$\rho_\lambda(t) := \mathrm{sign}(t)\,\lambda\int_0^{|t|}\Big(1 - \frac{z}{\lambda b}\Big)_+ \, dz, \qquad (3)$$
where $b > 0$ is a fixed parameter. Assumption 1 holds with $L = 1$ and $\mu = \frac{1}{b}$.
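For reference, here is a small NumPy sketch (our own code) of the two penalties defined in (2) and (3); the closed form used for MCP below follows from integrating (3):

```python
import numpy as np

def scad(t, lam, a=3.7):
    """SCAD penalty of equation (2), elementwise; requires a > 2."""
    t = np.abs(t)
    return np.where(
        t <= lam, lam * t,
        np.where(t <= a * lam,
                 -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1)),
                 (a + 1) * lam**2 / 2))

def mcp(t, lam, b=3.5):
    """MCP penalty of equation (3), elementwise; requires b > 0."""
    t = np.abs(t)
    return np.where(t <= b * lam,
                    lam * t - t**2 / (2 * b),   # integral of (1 - z/(lam*b))_+
                    b * lam**2 / 2)
```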
2.3 Nonconvex loss functions and restricted strong convexity
Throughout this paper, we require the loss function $\mathcal{L}_n$ to be differentiable, but we do not require it to be convex. Instead, we impose a weaker condition known as restricted strong convexity (RSC). Such conditions have been discussed in previous literature [9, 1], and involve a lower bound on the remainder in the first-order Taylor expansion of $\mathcal{L}_n$. In particular, our main statistical result is based on the following RSC condition:
$$\langle \nabla\mathcal{L}_n(\beta^* + \Delta) - \nabla\mathcal{L}_n(\beta^*),\ \Delta\rangle \ge \begin{cases} \alpha_1\|\Delta\|_2^2 - \tau_1\frac{\log p}{n}\|\Delta\|_1^2, & \forall\,\|\Delta\|_2 \le 1, & \text{(4a)}\\[4pt] \alpha_2\|\Delta\|_2 - \tau_2\sqrt{\frac{\log p}{n}}\,\|\Delta\|_1, & \forall\,\|\Delta\|_2 \ge 1, & \text{(4b)} \end{cases}$$
where the $\alpha_j$'s are strictly positive constants and the $\tau_j$'s are nonnegative constants.
To understand this condition, note that if $\mathcal{L}_n$ were actually strongly convex, then both these RSC inequalities would hold with $\alpha_1 = \alpha_2 > 0$ and $\tau_1 = \tau_2 = 0$. However, in the high-dimensional setting ($p \gg n$), the empirical loss $\mathcal{L}_n$ can never be strongly convex, but the RSC condition may still hold with strictly positive $(\alpha_j, \tau_j)$. On the other hand, if $\mathcal{L}_n$ is convex (but not strongly convex), the left-hand expression in inequality (4) is always nonnegative, so inequalities (4a) and (4b) hold trivially for $\frac{\|\Delta\|_1}{\|\Delta\|_2} \ge \sqrt{\frac{\alpha_1\,n}{\tau_1\log p}}$ and $\frac{\|\Delta\|_1}{\|\Delta\|_2} \ge \frac{\alpha_2}{\tau_2}\sqrt{\frac{n}{\log p}}$, respectively. Hence, the RSC inequalities only enforce a type of strong convexity condition over a cone set of the form $\big\{\frac{\|\Delta\|_1}{\|\Delta\|_2} \le c\sqrt{\frac{n}{\log p}}\big\}$.
3 Statistical guarantees and consequences
We now turn to our main statistical guarantees and some consequences for various statistical models. Our theory applies to any vector $\tilde\beta \in \mathbb{R}^p$ that satisfies the first-order necessary conditions to be a local minimum of the program (1):
$$\langle \nabla\mathcal{L}_n(\tilde\beta) + \nabla\rho_\lambda(\tilde\beta),\ \beta - \tilde\beta\rangle \ge 0, \qquad \text{for all feasible } \beta \in \mathbb{R}^p. \qquad (5)$$
When $\tilde\beta$ lies in the interior of the constraint set, condition (5) is the usual zero-subgradient condition.
3.1 Main statistical results
Our main theorem is deterministic in nature, and specifies conditions on the regularizer, loss function, and parameters which guarantee that any local optimum $\tilde\beta$ lies close to the target vector $\beta^* = \arg\min_{\beta\in\mathbb{R}^p} \mathcal{L}(\beta)$. Corresponding probabilistic results will be derived in subsequent sections. For proofs and more detailed discussion of the results contained in this paper, see the technical report [7].
Theorem 1. Suppose the regularizer $\rho_\lambda$ satisfies Assumption 1, $\mathcal{L}_n$ satisfies the RSC conditions (4) with $\alpha_1 > \mu$, and $\beta^*$ is feasible for the objective. Consider any choice of $\lambda$ such that
$$\frac{2}{L}\max\left\{\|\nabla\mathcal{L}_n(\beta^*)\|_\infty,\ \alpha_2\sqrt{\frac{\log p}{n}}\right\} \le \lambda \le \frac{\alpha_2}{6RL}, \qquad (6)$$
and suppose $n \ge \frac{16R^2\max(\tau_1^2, \tau_2^2)}{\alpha_2^2}\log p$. Then any vector $\tilde\beta$ satisfying the first-order necessary conditions (5) satisfies the error bounds
$$\|\tilde\beta - \beta^*\|_2 \le \frac{7\lambda L\sqrt{k}}{4(\alpha_1 - \mu)}, \qquad \text{and} \qquad \|\tilde\beta - \beta^*\|_1 \le \frac{56\lambda L k}{4(\alpha_1 - \mu)}, \qquad (7)$$
where $k = \|\beta^*\|_0$.
From the bound (7), note that the squared $\ell_2$-error grows proportionally with $k$, the number of nonzeros in the target parameter, and with $\lambda^2$. As will be clarified in the following sections, choosing $\lambda$ proportional to $\sqrt{\frac{\log p}{n}}$ and $R$ proportional to $\frac{1}{\lambda}$ will satisfy the requirements of Theorem 1 w.h.p. for many statistical models, in which case we have a squared $\ell_2$-error that scales as $\frac{k\log p}{n}$, as expected.
Remark 1. It is worthwhile to discuss the quantity $\alpha_1 - \mu$ appearing in the denominator of the bound in Theorem 1. Recall that $\alpha_1$ measures the level of curvature of the loss function $\mathcal{L}_n$, while $\mu$ measures the level of nonconvexity of the penalty $\rho_\lambda$. Intuitively, the two quantities should play opposing roles in our result: larger values of $\mu$ correspond to more severe nonconvexity of the penalty, resulting in worse behavior of the overall objective (1), whereas larger values of $\alpha_1$ correspond to more (restricted) curvature of the loss, leading to better behavior.
We now develop corollaries for various nonconvex loss functions and regularizers of interest.
3.2 Corrected linear regression
We begin by considering the case of high-dimensional linear regression with systematically corrupted observations. Recall that in the framework of ordinary linear regression, we have the model
$$y_i = \langle \beta^*, x_i\rangle + \epsilon_i = \sum_{j=1}^p \beta_j^* x_{ij} + \epsilon_i, \qquad \text{for } i = 1, \ldots, n, \qquad (8)$$
where $\beta^* \in \mathbb{R}^p$ is the unknown parameter vector and $\{(x_i, y_i)\}_{i=1}^n$ are observations. Following Loh and Wainwright [6], assume we instead observe pairs $\{(z_i, y_i)\}_{i=1}^n$, where the $z_i$'s are systematically corrupted versions of the corresponding $x_i$'s. Some examples include the following:
(a) Additive noise: Observe $z_i = x_i + w_i$, where $w_i \perp\!\!\!\perp x_i$, $\mathbb{E}[w_i] = 0$, and $\mathrm{cov}[w_i] = \Sigma_w$.
(b) Missing data: For $\vartheta \in [0, 1)$, observe $z_i \in \mathbb{R}^p$ such that for each component $j$, we independently observe $z_{ij} = x_{ij}$ with probability $1 - \vartheta$, and $z_{ij} = *$ with probability $\vartheta$.
We use the population and empirical loss functions
$$\mathcal{L}(\beta) = \frac{1}{2}\beta^T\Sigma_x\beta - \beta^{*T}\Sigma_x\beta, \qquad \text{and} \qquad \mathcal{L}_n(\beta) = \frac{1}{2}\beta^T\hat\Gamma\beta - \hat\gamma^T\beta, \qquad (9)$$
where $(\hat\Gamma, \hat\gamma)$ are estimators for $(\Sigma_x, \Sigma_x\beta^*)$ depending on $\{(z_i, y_i)\}_{i=1}^n$. Then $\beta^* = \arg\min_\beta \mathcal{L}(\beta)$.
From the formulation (1), the corrected linear regression estimator is given by
$$\hat\beta \in \arg\min_{g(\beta)\le R}\Big\{\frac{1}{2}\beta^T\hat\Gamma\beta - \hat\gamma^T\beta + \rho_\lambda(\beta)\Big\}. \qquad (10)$$
We now state a corollary in the case of additive noise (model (a)), where we take
$$\hat\Gamma = \frac{Z^TZ}{n} - \Sigma_w, \qquad \text{and} \qquad \hat\gamma = \frac{Z^Ty}{n}. \qquad (11)$$
When $p \gg n$, the matrix $\hat\Gamma$ in equation (11) is always negative definite, so the empirical loss function $\mathcal{L}_n$ previously defined in (9) is nonconvex. Other choices of $\hat\Gamma$ are applicable to missing data (model (b)), and also lead to nonconvex programs [6].
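As a concrete illustration, the corrected surrogates in (11) are straightforward to form from the observed data; a minimal NumPy sketch (our own code):

```python
import numpy as np

def corrected_moments_additive(Z, y, Sigma_w):
    """Surrogates (Gamma_hat, gamma_hat) of equation (11) for additive noise:
    Gamma_hat = Z'Z/n - Sigma_w estimates Sigma_x, and gamma_hat = Z'y/n
    estimates Sigma_x beta*."""
    n = Z.shape[0]
    Gamma_hat = Z.T @ Z / n - Sigma_w
    gamma_hat = Z.T @ y / n
    return Gamma_hat, gamma_hat

# When p > n, Gamma_hat has negative eigenvalues, so the quadratic loss
# 0.5 * b' Gamma_hat b - gamma_hat' b in (10) is nonconvex.
```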
Corollary 1. Suppose we have i.i.d. observations $\{(z_i, y_i)\}_{i=1}^n$ from a corrupted linear model with sub-Gaussian additive noise. Suppose $(\lambda, R)$ are chosen such that $\beta^*$ is feasible and
$$c\sqrt{\frac{\log p}{n}} \le \lambda \le \frac{c'}{R}.$$
Then given a sample size $n \ge C\max\{R^2, k\}\log p$, any local optimum $\tilde\beta$ of the nonconvex program (10) satisfies the estimation error bounds
$$\|\tilde\beta - \beta^*\|_2 \le \frac{c'\lambda\sqrt{k}}{\lambda_{\min}(\Sigma_x) - 2\mu}, \qquad \text{and} \qquad \|\tilde\beta - \beta^*\|_1 \le \frac{c''\lambda k}{\lambda_{\min}(\Sigma_x) - 2\mu},$$
with probability at least $1 - c_1\exp(-c_2\log p)$, where $\|\beta^*\|_0 = k$.
Remark 2. When $\rho_\lambda(\beta) = \lambda\|\beta\|_1$ and $g(\beta) = \|\beta\|_1$, taking $\lambda \asymp \sqrt{\frac{\log p}{n}}$ and $R = b_0\sqrt{k}$ for some constant $b_0 \ge \|\beta^*\|_2$ yields the required scaling $n \gtrsim k\log p$. Hence, the bounds in Corollary 1 agree with the bounds in Theorem 1 of Loh and Wainwright [6]. Note, however, that the latter results are stated only for a global minimum $\hat\beta$ of the program (10), whereas Corollary 1 is a much stronger result holding for any local minimum $\tilde\beta$. Theorem 2 of our earlier paper [6] provides an indirect route for establishing similar bounds on $\|\tilde\beta - \beta^*\|_1$ and $\|\tilde\beta - \beta^*\|_2$, since the projected gradient descent algorithm may become stuck in local minima. In contrast, our argument here does not rely on an algorithmic proof and applies to a more general class of (possibly nonconvex) penalties.
Corollary 1 also has important consequences in the case where pairs $\{(x_i, y_i)\}_{i=1}^n$ from the linear model (8) are observed without corruption and $\rho_\lambda$ is nonconvex. Then the empirical loss $\mathcal{L}_n$ is equivalent to the least-squares loss, modulo a constant factor. Much existing work [3, 14] only establishes statistical consistency of global minima and then provides specialized algorithms for obtaining specific local optima that are provably close to global optima. In contrast, our results demonstrate that any optimization algorithm converging to a local optimum suffices.
3.3 Generalized linear models
Moving beyond linear regression, we now consider the case where observations are drawn from a generalized linear model (GLM). Recall that a GLM is characterized by the conditional distribution
$$\mathbb{P}(y_i \mid x_i, \beta, \sigma) = \exp\left\{\frac{y_i\langle\beta, x_i\rangle - \psi(x_i^T\beta)}{c(\sigma)}\right\},$$
where $\sigma > 0$ is a scale parameter and $\psi$ is the cumulant function. By standard properties of exponential families [8, 5], we have
$$\psi'(x_i^T\beta) = \mathbb{E}[y_i \mid x_i, \beta, \sigma].$$
In our analysis, we assume there exists $\alpha_u > 0$ such that $\psi''(t) \le \alpha_u$ for all $t \in \mathbb{R}$. This boundedness assumption holds in various settings, including linear regression, logistic regression, and multinomial regression. The bound is required to establish both the statistical consistency results in the present section and the fast global convergence guarantees for our optimization algorithms in Section 4.
We will assume that $\beta^*$ is sparse and optimize the penalized maximum likelihood program
$$\hat\beta \in \arg\min_{g(\beta)\le R}\Big\{\frac{1}{n}\sum_{i=1}^n\big(\psi(x_i^T\beta) - y_i x_i^T\beta\big) + \rho_\lambda(\beta)\Big\}. \qquad (12)$$
We then have the following corollary:
Corollary 2. Suppose we have i.i.d. observations $\{(x_i, y_i)\}_{i=1}^n$ from a GLM, where the $x_i$'s are sub-Gaussian. Suppose $(\lambda, R)$ are chosen such that $\beta^*$ is feasible and
$$c\sqrt{\frac{\log p}{n}} \le \lambda \le \frac{c'}{R}.$$
Given a sample size $n \ge CR^2\log p$, any local optimum $\tilde\beta$ of the nonconvex program (12) satisfies
$$\|\tilde\beta - \beta^*\|_2 \le \frac{c'\lambda\sqrt{k}}{\lambda_{\min}(\Sigma_x) - 2\mu}, \qquad \text{and} \qquad \|\tilde\beta - \beta^*\|_1 \le \frac{c''\lambda k}{\lambda_{\min}(\Sigma_x) - 2\mu},$$
with probability at least $1 - c_1\exp(-c_2\log p)$, where $\|\beta^*\|_0 = k$.
4 Optimization algorithm
We now describe how a version of composite gradient descent may be applied to efficiently optimize the nonconvex program (1). We focus on a version of the optimization problem with the side function
$$g_{\lambda,\mu}(\beta) := \frac{1}{\lambda}\big\{\rho_\lambda(\beta) + \mu\|\beta\|_2^2\big\}, \qquad (13)$$
which is convex by Assumption 1. We may then write the program (1) as
$$\hat\beta \in \arg\min_{g_{\lambda,\mu}(\beta)\le R}\Big\{\underbrace{\mathcal{L}_n(\beta) - \mu\|\beta\|_2^2}_{\bar{\mathcal{L}}_n} + \lambda\,g_{\lambda,\mu}(\beta)\Big\}. \qquad (14)$$
The objective function then decomposes nicely into a sum of a differentiable but nonconvex function and a possibly nonsmooth but convex penalty. Applied to the representation (14), the composite gradient descent procedure of Nesterov [10] produces a sequence of iterates $\{\beta^t\}_{t=0}^\infty$ via the updates
$$\beta^{t+1} \in \arg\min_{g_{\lambda,\mu}(\beta)\le R}\left\{\frac{1}{2}\left\|\beta - \Big(\beta^t - \frac{\nabla\bar{\mathcal{L}}_n(\beta^t)}{\eta}\Big)\right\|_2^2 + \frac{\lambda}{\eta}\,g_{\lambda,\mu}(\beta)\right\}, \qquad (15)$$
where $\frac{1}{\eta}$ is the stepsize. Define the Taylor error around $\beta_2$ in the direction $\beta_1 - \beta_2$ by
$$\mathcal{T}(\beta_1, \beta_2) := \mathcal{L}_n(\beta_1) - \mathcal{L}_n(\beta_2) - \langle\nabla\mathcal{L}_n(\beta_2),\ \beta_1 - \beta_2\rangle. \qquad (16)$$
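To make the update (15) concrete, here is a minimal NumPy sketch (our own code) for the special case $\rho_\lambda(\beta) = \lambda\|\beta\|_1$, where $\mu = 0$ and the inner step reduces to soft-thresholding; for simplicity we omit the side constraint $g_{\lambda,\mu}(\beta) \le R$, which would additionally require a projection onto the $\ell_1$-ball:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1: elementwise shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_gradient_descent(grad, beta0, lam, eta, iters=1000):
    """Iterates of (15) with l1 penalty: apply the prox of (lam/eta)||.||_1
    to beta - grad(beta)/eta, where grad computes the gradient of L_n."""
    beta = beta0.copy()
    for _ in range(iters):
        beta = soft_threshold(beta - grad(beta) / eta, lam / eta)
    return beta

# Example for corrected linear regression (10) with (Gamma_hat, gamma_hat)
# from (11): the gradient of 0.5 * b' Gamma_hat b - gamma_hat' b is
#   grad = lambda b: Gamma_hat @ b - gamma_hat
```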
For all vectors $\beta_2 \in \mathbb{B}_2(3) \cap \mathbb{B}_1(R)$, we require the following form of restricted strong convexity:
$$\mathcal{T}(\beta_1, \beta_2) \ge \begin{cases} \alpha_1\|\beta_1-\beta_2\|_2^2 - \tau_1\frac{\log p}{n}\|\beta_1-\beta_2\|_1^2, & \forall\,\|\beta_1-\beta_2\|_2 \le 3, & \text{(17a)}\\[4pt] \alpha_2\|\beta_1-\beta_2\|_2 - \tau_2\sqrt{\frac{\log p}{n}}\,\|\beta_1-\beta_2\|_1, & \forall\,\|\beta_1-\beta_2\|_2 \ge 3. & \text{(17b)} \end{cases}$$
The conditions (17) are similar but not identical to the earlier RSC conditions (4). The main difference is that we now require the Taylor difference to be bounded below uniformly over $\beta_2 \in \mathbb{B}_2(3) \cap \mathbb{B}_1(R)$, as opposed to for a fixed $\beta_2 = \beta^*$. We also assume an upper bound:
$$\mathcal{T}(\beta_1, \beta_2) \le \alpha_3\|\beta_1-\beta_2\|_2^2 + \tau_3\frac{\log p}{n}\|\beta_1-\beta_2\|_1^2, \qquad \text{for all } \beta_1, \beta_2 \in \mathbb{R}^p, \qquad (18)$$
a condition referred to as restricted smoothness in past work [1]. Throughout this section, we assume $\alpha_i > \mu$ for all $i$, where $\mu$ is the coefficient ensuring the convexity of the function $g_{\lambda,\mu}$ from equation (13). Furthermore, we define $\alpha = \min\{\alpha_1, \alpha_2\}$ and $\tau = \max\{\tau_1, \tau_2, \tau_3\}$.
The following theorem applies to any population loss function $\mathcal{L}$ for which the population minimizer $\beta^*$ is $k$-sparse and $\|\beta^*\|_2 \le 1$, and under the scaling $n > Ck\log p$, for a constant $C$ depending on the $\alpha_i$'s and $\tau_i$'s. We show that the composite gradient updates (15) exhibit a type of globally geometric convergence in terms of the quantity
$$\kappa := \frac{1 - \frac{\alpha-\mu}{4\eta} + \varphi(n, p, k)}{1 - \varphi(n, p, k)}, \qquad \text{where } \varphi(n, p, k) := \frac{128\,\tau\,\frac{k\log p}{n}}{\alpha - \mu}. \qquad (19)$$
Under the stated scaling on the sample size, we are guaranteed that $\kappa \in (0, 1)$. Let
$$T^*(\delta) := \frac{2\log\frac{\phi(\beta^0) - \phi(\hat\beta)}{\delta^2}}{\log(1/\kappa)} + \left(1 + \frac{\log 2}{\log(1/\kappa)}\right)\log\log\frac{\lambda RL}{\delta^2}, \qquad (20)$$
where $\phi(\beta) := \mathcal{L}_n(\beta) + \rho_\lambda(\beta)$, and define $\epsilon_{\mathrm{stat}} := \|\hat\beta - \beta^*\|_2$.
Theorem 2. Suppose $\mathcal{L}_n$ satisfies the RSC/RSM conditions (17) and (18), and suppose $\rho_\lambda$ satisfies Assumption 1. Suppose $\hat\beta$ is any global minimum of the program (14), with
$$R\sqrt{\frac{\log p}{n}} \le c, \qquad \text{and} \qquad \lambda \ge \frac{4}{L}\max\left\{\|\nabla\mathcal{L}_n(\beta^*)\|_\infty,\ \tau\sqrt{\frac{\log p}{n}}\right\}.$$
Then for any stepsize $\eta \ge 2\cdot\max\{\alpha_3 - \mu, \mu\}$ and tolerance $\delta^2 \ge \frac{c\,\epsilon_{\mathrm{stat}}^2}{1-\kappa}$, we have
$$\|\beta^t - \hat\beta\|_2^2 \le \frac{4}{\alpha - \mu}\left(\delta^2 + \frac{\delta^4}{\tau} + 128\,\tau\,\frac{k\log p}{n}\,\epsilon_{\mathrm{stat}}^2\right), \qquad \forall t \ge T^*(\delta). \qquad (21)$$
Remark 3. Note that for the optimal choice of tolerance parameter $\delta \asymp \epsilon_{\mathrm{stat}}$, the bound in inequality (21) takes the form $\frac{c\,\epsilon_{\mathrm{stat}}^2}{\alpha-\mu}$, meaning successive iterates are guaranteed to converge to a region within statistical accuracy of the true global optimum $\hat\beta$. Combining Theorems 1 and 2, we have
$$\max\big\{\|\beta^t - \hat\beta\|_2,\ \|\beta^t - \beta^*\|_2\big\} = O\!\left(\sqrt{\frac{k\log p}{n}}\right), \qquad \forall t \ge T^*(c'\epsilon_{\mathrm{stat}}).$$
5 Simulations
In this section, we report the results of simulations for two versions of the loss function $\mathcal{L}_n$, corresponding to linear and logistic regression, and three penalty functions: Lasso, SCAD, and MCP. In all cases, we chose regularization parameters $R = \frac{1.1}{\lambda}\,\rho_\lambda(\beta^*)$ and $\lambda = \sqrt{\frac{\log p}{n}}$.
Linear regression: In the case of linear regression, we simulated covariates corrupted by additive noise according to the mechanism described in Section 3.2, giving the estimator
$$\hat\beta \in \arg\min_{g_{\lambda,\mu}(\beta)\le R}\left\{\frac{1}{2}\beta^T\Big(\frac{Z^TZ}{n} - \Sigma_w\Big)\beta - \frac{y^TZ}{n}\beta + \rho_\lambda(\beta)\right\}. \qquad (22)$$
We generated i.i.d. samples $x_i \sim N(0, I)$ and $\epsilon_i \sim N(0, (0.1)^2)$, and set $\Sigma_w = (0.2)^2 I$.
Logistic regression: In the case of logistic regression, we generated i.i.d. samples $x_i \sim N(0, I)$. Since $\psi(t) = \log(1 + \exp(t))$, the program (12) becomes
$$\hat\beta \in \arg\min_{g_{\lambda,\mu}(\beta)\le R}\Big\{\frac{1}{n}\sum_{i=1}^n\big(\log(1 + \exp(\langle\beta, x_i\rangle)) - y_i\langle\beta, x_i\rangle\big) + \rho_\lambda(\beta)\Big\}. \qquad (23)$$
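For use with the composite gradient updates (15), the smooth part of (23) has a simple closed-form gradient; a small NumPy sketch (our own code):

```python
import numpy as np

def logistic_grad(beta, X, y):
    """Gradient of (1/n) sum_i [log(1 + exp(<beta, x_i>)) - y_i <beta, x_i>],
    the smooth part of program (23)."""
    n = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-X @ beta))   # psi'(<beta, x_i>) for the logistic GLM
    return X.T @ (p - y) / n
```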
We optimized the programs (22) and (23) using the composite gradient updates (15). Figure 1 shows the results of corrected linear regression with Lasso, SCAD, and MCP regularizers for three different problem sizes $p$. In each case, $\beta^*$ is a $k$-sparse vector with $k = \lfloor\sqrt{p}\rfloor$, where the nonzero entries were generated from a normal distribution and the vector was then rescaled so $\|\beta^*\|_2 = 1$. As predicted by Theorem 1, the curves corresponding to the same penalty function stack up nicely when the estimation error $\|\hat\beta - \beta^*\|_2$ is plotted against the rescaled sample size $\frac{n}{k\log p}$, and the $\ell_2$-error decreases to zero as the number of samples increases, showing that the estimators (22) and (23) are statistically consistent. We chose the parameter $a = 3.7$ for SCAD and $b = 3.5$ for MCP.
[Figure 1 omitted: two panels, "comparing penalties for corrected linear regression" and "comparing penalties for logistic regression", plotting $\ell_2$-norm error against $n/(k\log p)$ for $p \in \{128, 256, 512\}$.]
Figure 1. Plots showing statistical consistency of (a) linear and (b) logistic regression with Lasso, SCAD, and MCP. Each point represents an average over 20 trials. The estimation error $\|\hat\beta - \beta^*\|_2$ is plotted against the rescaled sample size $\frac{n}{k\log p}$. Lasso, SCAD, and MCP results are represented by solid, dotted, and dashed lines, respectively.
The simulations in Figure 2 depict the optimization-theoretic conclusions of Theorem 2. Each panel shows two different families of curves, corresponding to statistical error (red) and optimization error (blue). The vertical axis measures the $\ell_2$-error on a log scale, while the horizontal axis tracks the iteration number. The curves were obtained by running composite gradient descent from 10 random starting points. We used $p = 128$, $k = \lfloor\sqrt{p}\rfloor$, and $n = \lfloor 20k\log p\rfloor$. As predicted by our theory, the optimization error decreases at a linear rate until it falls to the level of statistical error. Panels (b) and (c) provide simulations for two values of the SCAD parameter $a$; the larger choice $a = 3.7$ corresponds to a higher level of curvature and produces a tighter cluster of local optima.
[Figure 2 omitted: three log-error panels for corrected linear regression, (a) MCP with b = 1.5, (b) SCAD with a = 3.7, and (c) SCAD with a = 2.5, each plotting optimization and statistical error against iteration count.]
Figure 2. Plots illustrating linear rates of convergence for corrected linear regression with MCP and SCAD. Red lines depict statistical error $\log\|\hat\beta - \beta^*\|_2$ and blue lines depict optimization error $\log\|\beta^t - \hat\beta\|_2$. As predicted by Theorem 2, the optimization error decreases linearly up to statistical accuracy. Each plot shows the solution trajectory for 10 initializations of composite gradient descent. Panel (a) shows results for MCP; panels (b) and (c) show results for SCAD with different values of a.
Figure 3 provides analogous results to Figure 2 for logistic regression, using p = 64, k = ⌊√p⌋, and n = ⌊20k log p⌋. The plot shows solution trajectories for 20 different initializations of composite gradient descent. Again, the log optimization error decreases at a linear rate up to the level of statistical error, as predicted by Theorem 2. Whereas the convex Lasso penalty yields a unique local/global optimum β̂, SCAD and MCP produce multiple local optima.
[Figure 3: three panels of log ℓ₂-error ("opt err" in blue, "stat err" in red) against iteration count, for logistic regression with (a) Lasso, (b) SCAD with a = 3.7, and (c) MCP with b = 3.]
Figure 3. Plots showing linear rates of convergence on a log scale for logistic regression. Red lines depict statistical error and blue lines depict optimization error. (a) Lasso penalty. (b) SCAD penalty. (c) MCP. Each plot shows the solution trajectory for 20 initializations of composite gradient descent.
6 Discussion
We have analyzed theoretical properties of local optima of regularized M-estimators, where both the loss and penalty function are allowed to be nonconvex. Our results are the first to establish that all local optima of such nonconvex problems are close to the truth, implying that any optimization method guaranteed to converge to a local optimum will provide statistically consistent solutions. We show that a variant of composite gradient descent may be used to obtain near-global optima in linear time, and verify our theoretical results with simulations.
Acknowledgments
PL acknowledges support from a Hertz Foundation Fellowship and an NSF Graduate Research Fellowship. MJW and PL were also partially supported by grants NSF-DMS-0907632 and AFOSR-09NL184. The authors thank the anonymous reviewers for helpful feedback.
4,315 | 4,905 | More data speeds up training time in learning
halfspaces over sparse vectors
Amit Daniely
Department of Mathematics
The Hebrew University
Jerusalem, Israel
Nati Linial
School of CS and Eng.
The Hebrew University
Jerusalem, Israel
Shai Shalev-Shwartz
School of CS and Eng.
The Hebrew University
Jerusalem, Israel
Abstract
The increased availability of data in recent years has led several authors to ask
whether it is possible to use data as a computational resource. That is, if more
data is available, beyond the sample complexity limit, is it possible to use the
extra examples to speed up the computation time required to perform the learning
task?
We give the first positive answer to this question for a natural supervised learning
problem ? we consider agnostic PAC learning of halfspaces over 3-sparse
vec
tors in {?1, 1, 0}n . This class is inefficiently learnable using O n/2 examples.
Our main contribution is a novel, non-cryptographic, methodology for establishing computational-statistical gaps, which allows us to show that, under a widely
believed assumption that refuting random 3CNFformulas is hard, it is impossible
to efficiently learn this class using only O n/2 examples.
We further show that
under stronger hardness assumptions, even O n1.499 /2 examples do not suffice. On the other
hand, we show a new algorithm that learns this class efficiently
? n2 /2 examples. This formally establishes the tradeoff between sample
using ?
and computational complexity for a natural supervised learning problem.
1 Introduction
In the modern digital period, we are facing a rapid growth of available datasets in science and
technology. In most computing tasks (e.g. storing and searching in such datasets), large datasets
are a burden and require more computation. However, for learning tasks the situation is radically
different. A simple observation is that more data can never hinder you from performing a task. If
you have more data than you need, just ignore it!
A basic question is how to learn from "big data". The statistical learning literature classically studies questions like "how much data is needed to perform a learning task?" or "how does accuracy improve as the amount of data grows?" etc. In the modern, "data revolution era", it is often the case that the amount of data available far exceeds the information theoretic requirements. We can wonder whether this seemingly redundant data can be used for other purposes. An intriguing question in this vein, studied recently by several researchers ([Decatur et al., 1998, Servedio, 2000, Shalev-Shwartz et al., 2012, Berthet and Rigollet, 2013, Chandrasekaran and Jordan, 2013]), is the following
Question 1: Are there any learning tasks in which more data, beyond the information theoretic barrier, can provably be leveraged to speed up computation
time?
The main contributions of this work are:
• Conditioning on the hardness of refuting random 3CNF formulas, we give the first example of a natural supervised learning problem for which the answer to Question 1 is positive.
• To prove this, we present a novel technique to establish computational-statistical tradeoffs in supervised learning problems. To the best of our knowledge, this is the first such result that is not based on cryptographic primitives.
Additional contributions are non-trivial efficient algorithms for learning halfspaces over 2-sparse and 3-sparse vectors using Õ(n/ε²) and Õ(n²/ε²) examples respectively.
The natural learning problem we consider is the task of learning the class of halfspaces over k-sparse vectors. Here, the instance space is the space of k-sparse vectors,
    C_{n,k} = {x ∈ {−1, 1, 0}ⁿ : |{i : x_i ≠ 0}| ≤ k},
and the hypothesis class is halfspaces over k-sparse vectors, namely
    H_{n,k} = {h_{w,b} : C_{n,k} → {±1} | h_{w,b}(x) = sign(⟨w, x⟩ + b), w ∈ Rⁿ, b ∈ R},
where ⟨·, ·⟩ denotes the standard inner product in Rⁿ.
We consider the standard setting of agnostic PAC learning, which models the realistic scenario
where the labels are not necessarily fully determined by some hypothesis from Hn,k . Note that in
the realizable case, i.e. when some hypothesis from Hn,k has zero error, the problem of learning
halfspaces is easy even over Rn .
In addition, we allow improper learning (a.k.a. representation independent learning), namely, the learning algorithm is not restricted to output a hypothesis from H_{n,k}, but should only output a hypothesis whose error is not much larger than the error of the best hypothesis in H_{n,k}. This gives the
learner a lot of flexibility in choosing an appropriate representation of the problem. This additional
freedom to the learner makes it much harder to prove lower bounds in this model. Concretely, it is
not clear how to use standard reductions from NP-hard problems in order to establish lower bounds
for improper learning (moreover, Applebaum et al. [2008] give evidence that such simple reductions
do not exist).
The classes H_{n,k} and similar classes have been studied by several authors (e.g. Long and Servedio [2013]). They naturally arise in learning scenarios in which the set of all possible features is very large, but each example has only a small number of active features. For example:
• Predicting an advertisement based on a search query: Here, the possible features of each instance are all English words, whereas the active features are only the set of words given in the query.
• Learning Preferences [Hazan et al., 2012]: Here, we have n players. A ranking of the players is a permutation π : [n] → [n] (think of π(i) as the rank of the i'th player). Each ranking induces a preference h_π over the ordered pairs, such that h_π(i, j) = 1 iff i is ranked higher than j. Namely,
    h_π(i, j) = 1 if π(i) > π(j), and −1 if π(i) < π(j).
The objective here is to learn the class, P_n, of all possible preferences. The problem of learning preferences is related to the problem of learning H_{n,2}: if we associate each pair (i, j) with the vector in C_{n,2} whose i'th coordinate is 1 and whose j'th coordinate is −1, it is not hard to see that P_n ⊆ H_{n,2}: for every π, h_π = h_{w,0} for the vector w ∈ Rⁿ given by w_i = π(i) (a small illustrative sketch of this embedding follows the list). Therefore, every upper bound for H_{n,2} implies an upper bound for P_n, while every lower bound for P_n implies a lower bound for H_{n,2}. Since VC(P_n) = n and VC(H_{n,2}) = n + 1, the information theoretic barrier to learn these classes is Θ(n/ε²). In Hazan et al. [2012] it was shown that P_n can be efficiently learnt using O(n log³(n)/ε²) examples. In section 4, we extend this result to H_{n,2}.
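The embedding just described is easy to check numerically; the following small sketch (ours, for illustration only) verifies that sign(⟨w, x⟩) with w_i = π(i) reproduces h_π on every ordered pair.

    import numpy as np

    def pair_to_vector(i, j, n):
        # The C_{n,2} vector with +1 in coordinate i and -1 in coordinate j.
        x = np.zeros(n)
        x[i], x[j] = 1.0, -1.0
        return x

    def preference(pi, i, j):
        # h_pi(i, j): +1 iff player i is ranked higher than player j.
        return 1 if pi[i] > pi[j] else -1

    n = 5
    rng = np.random.default_rng(0)
    pi = rng.permutation(n) + 1        # pi[i] = rank of player i, a value in [n]
    w = pi.astype(float)               # the halfspace weight vector w_i = pi(i)
    for i in range(n):
        for j in range(n):
            if i != j:
                assert np.sign(w @ pair_to_vector(i, j, n)) == preference(pi, i, j)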
We will show a positive answer to Question 1 for the class H_{n,3}. To do so, we show¹ the following:
¹ In fact, similar results hold for every constant k ≥ 3. Indeed, since H_{n,3} ⊆ H_{n,k} for every k ≥ 3, it is trivial that item 3 below holds for every k ≥ 3. The upper bound given in item 1 holds for every k. For item 2, it is not hard to show that H_{n,k} can be learnt using a sample of Õ(nᵏ/ε²) examples by a naive improper learning algorithm, similar to the algorithm we describe in this section for k = 3.
1. Ignoring computational issues, it is possible to learn the class H_{n,3} using O(n/ε²) examples.
2. It is also possible to efficiently learn H_{n,3} if we are provided with a larger training set (of size Ω̃(n²/ε²)). This is formalized in Theorem 4.1.
3. It is impossible to efficiently learn H_{n,3}, if we are only provided with a training set of size O(n/ε²), under Feige's assumption regarding the hardness of refuting random 3CNF formulas [Feige, 2002]. Furthermore, for every α ∈ [0, 0.5), it is impossible to learn efficiently with a training set of size O(n^{1+α}/ε²) under a stronger hardness assumption. This is formalized in Theorem 3.1.
A graphical illustration of our main results is given below:
[Figure: runtime (n^{O(1)}, > poly(n), 2^{O(n)}) plotted against the number of examples (n, n^{1.5}, n²).]
The proof of item 1 above is easy: simply note that H_{n,3} has VC dimension n + 1.
Item 2 is proved in section 4, relying on the results of Hazan et al. [2012]. We note, however, that a weaker result, that still suffices for answering Question 1 in the affirmative, can be proven using a naive improper learning algorithm. In particular, we show below how to learn H_{n,3} efficiently with a sample of Õ(n³/ε²) examples. The idea is to replace the class H_{n,3} with the class {±1}^{C_{n,3}} containing all functions from C_{n,3} to {±1}. Clearly, this class contains H_{n,3}. In addition, we can efficiently find a function f that minimizes the empirical training error over a training set S as follows: For every x ∈ C_{n,3}, if x does not appear at all in the training set we will set f(x) arbitrarily to 1. Otherwise, we will set f(x) to be the majority of the labels in the training set that correspond to x. Finally, note that the VC dimension of {±1}^{C_{n,3}} is smaller than n³ (since |C_{n,3}| < n³). Hence, standard generalization results (e.g. Vapnik [1995]) imply that a training set of size Õ(n³/ε²) suffices for learning this class.
Item 3 is shown in section 3 by presenting a novel technique for establishing statistical-computational tradeoffs.
The class H_{n,2}. Our main result gives a positive answer to Question 1 for the task of improperly learning H_{n,k} for k ≥ 3. A natural question is what happens for k = 2 and k = 1. Since VC(H_{n,1}) = VC(H_{n,2}) = n + 1, the information theoretic barrier for learning these classes is Θ(n/ε²). In section 4, we prove that H_{n,2} (and, consequently, H_{n,1} ⊆ H_{n,2}) can be learnt using O(n log³(n)/ε²) examples, indicating that significant computational-statistical tradeoffs start to manifest themselves only for k ≥ 3.
1.1 Previous approaches, difficulties, and our techniques
[Decatur et al., 1998] and [Servedio, 2000] gave positive answers to Question 1 in the realizable PAC learning model. Under cryptographic assumptions, they showed that there exist binary learning problems in which more data can provably be used to speed up training time. [Shalev-Shwartz et al., 2012] showed a similar result for the agnostic PAC learning model. In all of these papers, the main idea is to construct a hypothesis class based on a one-way function. However, the constructed
classes are of a very synthetic nature, and are of almost no practical interest. This is mainly due to the construction technique, which is based on one-way functions. In this work, instead of using cryptographic assumptions, we rely on the hardness of refuting random 3CNF formulas. The simplicity and flexibility of 3CNF formulas enable us to derive lower bounds for natural classes such as halfspaces.
Recently, [Berthet and Rigollet, 2013] gave a positive answer to Question 1 in the context of unsupervised learning. Concretely, they studied the problem of sparse PCA, namely, finding a sparse vector
that maximizes the variance of the data. Conditioning on the hardness of the planted
clique problem, they gave a positive answer to Question 1 for sparse PCA. Our work, as well as
the previous work of Decatur et al. [1998], Servedio [2000], Shalev-Shwartz et al. [2012], studies
Question 1 in the supervised learning setup. We emphasize that unsupervised learning problems
are radically different than supervised learning problems in the context of deriving lower bounds.
The main reason for the difference is that in supervised learning problems, the learner is allowed
to employ improper learning, which gives it a lot of power in choosing an adequate representation of the data. For example, the upper bound we have derived for the class of sparse halfspaces
switched from representing hypotheses as halfspaces to representation of hypotheses as tables over
Cn,3 , which made the learning problem easy from the computational perspective. The crux of the
difficulty in constructing lower bounds is due to this freedom of the learner in choosing a convenient
representation. This difficulty does not arise in the problem of sparse PCA detection, since there
the learner must output a good sparse vector. Therefore, it is not clear whether the approach given
in [Berthet and Rigollet, 2013] can be used to establish computational-statistical gaps in supervised
learning problems.
2 Background and notation
For a hypothesis class H ⊆ {±1}^X and a set Y ⊆ X, we define the restriction of H to Y by H|_Y = {h|_Y : h ∈ H}. We denote by J = J_n the all-ones n × n matrix. We denote the j'th vector in the standard basis of Rⁿ by e_j.
2.1 Learning Algorithms
For h : C_{n,3} → {±1} and a distribution D on C_{n,3} × {±1} we denote the error of h w.r.t. D by Err_D(h) = Pr_{(x,y)∼D}(h(x) ≠ y). For H ⊆ {±1}^{C_{n,3}} we denote the error of H w.r.t. D by Err_D(H) = min_{h∈H} Err_D(h). For a sample S ∈ (C_{n,3} × {±1})^m we denote by Err_S(h) (resp. Err_S(H)) the error of h (resp. H) w.r.t. the empirical distribution induced by the sample S.
A learning algorithm, L, receives a sample S ∈ (C_{n,3} × {±1})^m and returns a hypothesis L(S) : C_{n,3} → {±1}. We say that L learns H_{n,3} using m(n, ε) examples if,² for every distribution D on C_{n,3} × {±1} and a sample S of more than m(n, ε) i.i.d. examples drawn from D,
    Pr_S( Err_D(L(S)) > Err_D(H_{n,3}) + ε ) < 1/10.
The algorithm L is efficient if it runs in polynomial time in the sample size and returns a hypothesis that can be evaluated in polynomial time.
2.2 Refuting random 3SAT formulas
We frequently view a boolean assignment to variables x₁, …, x_n as a vector in Rⁿ. It is convenient, therefore, to assume that boolean variables take values in {±1} and to denote negation by "−" (instead of the usual "¬"). An n-variables 3CNF clause is a boolean formula of the form
    C(x) = (−1)^{j₁}x_{i₁} ∨ (−1)^{j₂}x_{i₂} ∨ (−1)^{j₃}x_{i₃},   x ∈ {±1}ⁿ.
An n-variables 3CNF formula is a boolean formula of the form
    φ(x) = ∧_{i=1}^m C_i(x),
² For simplicity, we require the algorithm to succeed with probability of at least 9/10. This can be easily amplified to probability of at least 1 − δ, as in the usual definition of agnostic PAC learning, while increasing the sample complexity by a factor of log(1/δ).
where every C_i is a 3CNF clause. Define the value, Val(φ), of φ as the maximal fraction of clauses that can be simultaneously satisfied. If Val(φ) = 1, we say that φ is satisfiable. By 3CNF_{n,m} we denote the set of 3CNF formulas with n variables and m clauses.
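As a concrete illustration of these definitions, Val(φ) can be computed by brute force for small n; the clause encoding below (each literal as a (j, i) pair with sign (−1)^j applied to x_i) is our own convention.

    import itertools

    def clause_satisfied(clause, x):
        # A clause holds if at least one signed literal evaluates to +1.
        return any(((-1) ** j) * x[i] == 1 for (j, i) in clause)

    def val(phi, n):
        """Val(phi): maximal fraction of clauses simultaneously satisfied."""
        best = 0.0
        for x in itertools.product([-1, 1], repeat=n):
            frac = sum(clause_satisfied(c, x) for c in phi) / len(phi)
            best = max(best, frac)
        return best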
Refuting random 3CNF formulas has been studied extensively (see e.g. a special issue of TCS, Dubois et al. [2001]). It is known that for large enough ∆ (∆ = 6 will suffice) a random formula in 3CNF_{n,∆n} is not satisfiable with probability 1 − o(1). Moreover, for every 0 ≤ ε < 1/4 and a large enough ∆ = ∆(ε), the value of a random formula in 3CNF_{n,∆n} is ≤ 1 − ε with probability 1 − o(1).
The problem of refuting random 3CNF concerns efficient algorithms that provide a proof that a random 3CNF is not satisfiable, or far from being satisfiable. This can be thought of as a game between an adversary and an algorithm. The adversary should produce a 3CNF-formula. It can either produce a satisfiable formula, or produce a formula uniformly at random. The algorithm should identify whether the produced formula is random or satisfiable.
Formally, let ∆ : N → N and 0 ≤ ε < 1/4. We say that an efficient algorithm, A, ε-refutes random 3CNF with ratio ∆ if its input is φ ∈ 3CNF_{n,n∆(n)}, its output is either "typical" or "exceptional", and it satisfies:
• Soundness: If Val(φ) ≥ 1 − ε, then
    Pr_{rand. coins of A}( A(φ) = "exceptional" ) ≥ 3/4.
• Completeness: For every n,
    Pr_{rand. coins of A, φ∼Uni(3CNF_{n,n∆(n)})}( A(φ) = "typical" ) ≥ 1 − o(1).
By a standard repetition argument, the probability of 3/4 can be amplified to 1 − 2⁻ⁿ, while efficiency is preserved. Thus, given such an (amplified) algorithm, if A(φ) = "typical", then with confidence 1 − 2⁻ⁿ we know that Val(φ) < 1 − ε. Since for random φ ∈ 3CNF_{n,n∆(n)}, A(φ) = "typical" with probability 1 − o(1), such an algorithm provides, for most 3CNF formulas, a proof that their value is less than 1 − ε.
Note that an algorithm that ε-refutes random 3CNF with ratio ∆ also ε′-refutes random 3CNF with ratio ∆ for every 0 ≤ ε′ ≤ ε. Thus, the task of refuting random 3CNF's gets easier as ε gets smaller. Most of the research concerns the case ε = 0. Here, it is not hard to see that the task gets easier as ∆ grows. The best known algorithm [Feige and Ofek, 2007] 0-refutes random 3CNF with ratio ∆(n) = Ω̃(√n). In Feige [2002] it was conjectured that for constant ∆ no efficient algorithm can provide a proof that a random 3CNF is not satisfiable:
Conjecture 2.1 (R3SAT hardness assumption, [Feige, 2002]). For every ε > 0 and for every large enough integer ∆ > ∆₀(ε) there exists no efficient algorithm that ε-refutes random 3CNF formulas with ratio ∆.
In fact, for all we know, the following conjecture may be true for every 0 ≤ α ≤ 0.5.
Conjecture 2.2 (α-R3SAT hardness assumption). For every ε > 0 and for every integer ∆ > ∆₀(ε) there exists no efficient algorithm that ε-refutes random 3CNF with ratio ∆ · n^α.
Note that Feige's conjecture is equivalent to the 0-R3SAT hardness assumption.
3 Lower bounds for learning H_{n,3}
Theorem 3.1 (main). Let 0 ≤ α ≤ 0.5. If the α-R3SAT hardness assumption (conjecture 2.2) is true, then there exists no efficient learning algorithm that learns the class H_{n,3} using O(n^{1+α}/ε²) examples.
In the proof of Theorem 3.1 we rely on the validity of a conjecture, similar to conjecture 2.2, for 3-variables majority formulas. Following an argument from [Feige, 2002] (Theorem 3.2), the validity of the conjecture on which we rely for majority formulas follows from the validity of conjecture 2.2.
Define
    ∀(x₁, x₂, x₃) ∈ {±1}³,   MAJ(x₁, x₂, x₃) := sign(x₁ + x₂ + x₃).
An n-variables 3MAJ clause is a boolean formula of the form
    C(x) = MAJ((−1)^{j₁}x_{i₁}, (−1)^{j₂}x_{i₂}, (−1)^{j₃}x_{i₃}),   x ∈ {±1}ⁿ.
An n-variables 3MAJ formula is a boolean formula of the form
    φ(x) = ∧_{i=1}^m C_i(x),
where the C_i's are 3MAJ clauses. By 3MAJ_{n,m} we denote the set of 3MAJ formulas with n variables and m clauses.
Theorem 3.2 ([Feige, 2002]). Let 0 ≤ α ≤ 0.5. If the α-R3SAT hardness assumption is true, then for every ε > 0 and for every large enough integer ∆ > ∆₀(ε) there exists no efficient algorithm with the following properties.
• Its input is φ ∈ 3MAJ_{n,∆n^{1+α}}, and its output is either "typical" or "exceptional".
• If Val(φ) ≥ 3/4 − ε, then
    Pr_{rand. coins of A}( A(φ) = "exceptional" ) ≥ 3/4.
• For every n,
    Pr_{rand. coins of A, φ∼Uni(3MAJ_{n,∆n^{1+α}})}( A(φ) = "typical" ) ≥ 1 − o(1).
Next, we prove Theorem 3.1. In fact, we will prove a slightly stronger result. Namely, define the subclass H^d_{n,3} ⊆ H_{n,3} of homogeneous halfspaces with binary weights, given by H^d_{n,3} = {h_{w,0} : w ∈ {±1}ⁿ}. As we show, under the α-R3SAT hardness assumption, it is impossible to efficiently learn this subclass using only O(n^{1+α}/ε²) examples.
Proof idea: We will reduce the task of refuting random 3MAJ formulas with a linear number of clauses to the task of (improperly) learning H^d_{n,3} with a linear number of samples. The first step will be to construct a transformation that associates every 3MAJ clause with two examples in C_{n,3} × {±1}, and every assignment with a hypothesis in H^d_{n,3}. As we will show, the hypothesis corresponding to an assignment ψ is correct on the two examples corresponding to a clause C if and only if ψ satisfies C. With that interpretation at hand, every 3MAJ formula φ can be thought of as a distribution D_φ on C_{n,3} × {±1}, which is the empirical distribution induced by φ's clauses. It holds furthermore that Err_{D_φ}(H^d_{n,3}) = 1 − Val(φ).
Suppose now that we are given an efficient learning algorithm for H^d_{n,3} that uses ≤ β·n/ε² examples, for some β > 0. To construct an efficient algorithm for refuting 3MAJ-formulas, we simply feed the learning algorithm with ≈ 0.01·n/ε² examples drawn from D_φ and answer "exceptional" if the error of the hypothesis returned by the algorithm is small. If φ is (almost) satisfiable, the algorithm is guaranteed to return a hypothesis with a small error. On the other hand, if φ is far from being satisfiable, Err_{D_φ}(H^d_{n,3}) is large. If the learning algorithm is proper, then it must return a hypothesis from H^d_{n,3} and therefore it would necessarily return a hypothesis with a large error. This argument can be used to show that, unless NP = RP, learning H^d_{n,3} with a proper efficient algorithm is impossible. However, here we want to rule out improper algorithms as well.
The crux of the construction is that if φ is random, no algorithm (even improper and even inefficient) can return a hypothesis with a small error. The reason for that is that since the sample provided to the algorithm consists of only ≈ 0.01·n/ε² samples, the algorithm won't see most of φ's clauses, and, consequently, the produced hypothesis h will be independent of them. Since these clauses are random, h is likely to err on about half of them, so that Err_{D_φ}(h) will be close to half!
To summarize, we constructed an efficient algorithm with the following properties: if φ is almost satisfiable, the algorithm will return a hypothesis with a small error, and then we will declare "exceptional", while for random φ, the algorithm will return a hypothesis with a large error, and we will declare "typical".
Our construction crucially relies on the restriction to learning algorithms with a small sample complexity. Indeed, if the learning algorithm obtains more than n^{1+α} examples, then it will see most of φ's clauses, and therefore it might succeed in "learning" even when the source of the formula is random. Therefore, we will declare "exceptional" even when the source is random.
Proof (of Theorem 3.1). Assume by way of contradiction that the α-R3SAT hardness assumption is true and yet there exists an efficient learning algorithm that learns the class H_{n,3} using O(n^{1+α}/ε²) examples. Setting ε = 1/100, we conclude that there exists an efficient algorithm L and a constant β > 0 such that, given a sample S of more than β·n^{1+α} examples drawn from a distribution D on C_{n,3} × {±1}, L returns a classifier L(S) : C_{n,3} → {±1} such that
• L(S) can be evaluated efficiently;
• w.p. ≥ 3/4 over the choice of S, Err_D(L(S)) ≤ Err_D(H_{n,3}) + 1/100.
Fix ∆ large enough such that ∆ > 100β and the conclusion of Theorem 3.2 holds with ε = 1/100. We will construct an algorithm, A, contradicting Theorem 3.2. On input φ ∈ 3MAJ_{n,∆n^{1+α}} consisting of the 3MAJ clauses C₁, …, C_{∆n^{1+α}}, the algorithm A proceeds as follows:
1. Generate a sample S consisting of ∆n^{1+α} examples as follows. For every clause C_k = MAJ((−1)^{j₁}x_{i₁}, (−1)^{j₂}x_{i₂}, (−1)^{j₃}x_{i₃}), generate an example (x_k, y_k) ∈ C_{n,3} × {±1} by choosing b ∈ {±1} at random and letting
    (x_k, y_k) = b · ( Σ_{l=1}^{3} (−1)^{j_l} e_{i_l} , 1 ) ∈ C_{n,3} × {±1}.
For example, if n = 6, the clause is MAJ(−x₂, x₃, x₆) and b = −1, we generate the example ((0, 1, −1, 0, 0, −1), −1).
2. Choose a sample S₁ consisting of ∆n^{1+α}/100 ≥ β·n^{1+α} examples by choosing at random (with repetitions) examples from S.
3. Let h = L(S₁). If Err_S(h) ≤ 3/8, return "exceptional". Otherwise, return "typical".
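For illustration, steps 1-3 can be sketched as follows; L is assumed to be any learner returning a callable predictor, and the clause encoding (signed literals as (j, i) pairs) matches the earlier sketch. This is our own rendering of the reduction, not code from the paper.

    import numpy as np

    def clause_to_example(clause, n, rng):
        # (x_k, y_k) = b . ( sum_l (-1)^{j_l} e_{i_l} , 1 ) with random b in {-1, +1}.
        b = rng.choice([-1, 1])
        x = np.zeros(n)
        for (j, i) in clause:
            x[i] = (-1) ** j
        return b * x, b

    def refute(phi, n, L, rng):
        S = [clause_to_example(c, n, rng) for c in phi]
        m1 = len(S) // 100                           # |S_1| = Delta n^{1+alpha} / 100
        S1 = [S[t] for t in rng.integers(0, len(S), size=m1)]  # with repetitions
        h = L(S1)
        err = np.mean([h(x) != y for (x, y) in S])   # Err_S(h)
        return "exceptional" if err <= 3.0 / 8 else "typical"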
We claim that A contradicts Theorem 3.2. Clearly, A runs in polynomial time. It remains to show that
• If Val(φ) ≥ 3/4 − 1/100, then
    Pr_{rand. coins of A}( A(φ) = "exceptional" ) ≥ 3/4.
• For every n,
    Pr_{rand. coins of A, φ∼Uni(3MAJ_{n,∆n^{1+α}})}( A(φ) = "typical" ) ≥ 1 − o(1).
Assume first that φ ∈ 3MAJ_{n,∆n^{1+α}} is chosen at random. Given the sample S₁, the sample S₂ := S \ S₁ is a sample of |S₂| i.i.d. examples which are independent of the sample S₁, and hence also of h = L(S₁). Moreover, for every example (x_k, y_k) ∈ S₂, y_k is a Bernoulli random variable with parameter 1/2 which is independent of x_k. To see that, note that an example whose instance is x_k can be generated by exactly two clauses: one corresponds to y_k = 1, while the other corresponds to y_k = −1 (e.g., the instance (1, −1, 0, 1) can be generated from the clause MAJ(x₁, −x₂, x₄) with b = 1, or from the clause MAJ(−x₁, x₂, −x₄) with b = −1). Thus, given the instance x_k, the probability that y_k = 1 is 1/2, independent of x_k.
It follows that Err_{S₂}(h) is an average of at least (1 − 1/100)·∆n^{1+α} independent Bernoulli random variables. By Chernoff's bound, with probability ≥ 1 − o(1), Err_{S₂}(h) > 1/2 − 1/100. Thus,
    Err_S(h) ≥ (1 − 1/100)·Err_{S₂}(h) ≥ (1 − 1/100)·(1/2 − 1/100) > 3/8,
and the algorithm will output "typical".
Assume now that Val(φ) ≥ 3/4 − 1/100 and let ψ ∈ {±1}ⁿ be an assignment that indicates that. Let ψ̄ ∈ H_{n,3} be the hypothesis ψ̄(x) = sign(⟨ψ, x⟩). It can be easily checked that ψ̄(x_k) = y_k if and only if ψ satisfies C_k. Since Val(φ) ≥ 3/4 − 1/100, it follows that
    Err_S(ψ̄) ≤ 1/4 + 1/100.
Thus,
    Err_S(H_{n,3}) ≤ 1/4 + 1/100.
By the choice of L, with probability ≥ 1 − 1/4 = 3/4,
    Err_S(h) ≤ 1/4 + 1/100 + 1/100 < 3/8,
and the algorithm will return "exceptional".
4 Upper bounds for learning H_{n,2} and H_{n,3}
The following theorem derives upper bounds for learning H_{n,2} and H_{n,3}. Its proof relies on results from Hazan et al. [2012] about learning β-decomposable matrices, and due to the lack of space is given in the appendix.
Theorem 4.1.
• There exists an efficient algorithm that learns H_{n,2} using O(n log³(n)/ε²) examples.
• There exists an efficient algorithm that learns H_{n,3} using O(n² log³(n)/ε²) examples.
5 Discussion
We formally established a computational-sample complexity tradeoff for the task of (agnostically and improperly) PAC learning halfspaces over 3-sparse vectors. Our proof of the lower bound relies on a novel, non-cryptographic technique for establishing such tradeoffs. We also derive a new non-trivial upper bound for this task.
Open questions. An obvious open question is to close the gap between the lower and upper bounds. We conjecture that H_{n,3} can be learnt efficiently using a sample of Õ(n^{1.5}/ε²) examples. Also, we believe that our new proof technique can be used for establishing computational-sample complexity tradeoffs for other natural learning problems.
Acknowledgements: Amit Daniely is a recipient of the Google Europe Fellowship in Learning
Theory, and this research is supported in part by this Google Fellowship. Nati Linial is supported
by grants from ISF, BSF and I-Core. Shai Shalev-Shwartz is supported by the Israeli Science Foundation grant number 590-10.
References
Benny Applebaum, Boaz Barak, and David Xiao. On basing lower-bounds for learning on worst-case assumptions. In Foundations of Computer Science, 2008. FOCS'08. IEEE 49th Annual IEEE Symposium on, pages 211–220. IEEE, 2008.
Quentin Berthet and Philippe Rigollet. Complexity theoretic lower bounds for sparse principal component detection. In COLT, 2013.
Nicolo Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50:2050–2057, 2001.
Venkat Chandrasekaran and Michael I. Jordan. Computational and statistical tradeoffs via convex relaxation. Proceedings of the National Academy of Sciences, 2013.
S. Decatur, O. Goldreich, and D. Ron. Computational sample complexity. SIAM Journal on Computing, 29, 1998.
O. Dubois, R. Monasson, B. Selman, and R. Zecchina (Guest Editors). Phase Transitions in Combinatorial Problems. Theoretical Computer Science, Volume 265, Numbers 1-2, 2001.
U. Feige. Relations between average case complexity and approximation complexity. In STOC, pages 534–543, 2002.
Uriel Feige and Eran Ofek. Easily refutable subformulas of large random 3CNF formulas. Theory of Computing, 3(1):25–43, 2007.
E. Hazan, S. Kale, and S. Shalev-Shwartz. Near-optimal algorithms for online matrix prediction. In COLT, 2012.
P. Long and R. Servedio. Low-weight halfspaces for sparse boolean vectors. In ITCS, 2013.
R. Servedio. Computational sample complexity and attribute-efficient learning. J. of Comput. Syst. Sci., 60(1):161–178, 2000.
Shai Shalev-Shwartz, Ohad Shamir, and Eran Tromer. Using more data to speed-up training time. In AISTATS, 2012.
V.N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
4,316 | 4,906 | Convex Calibrated Surrogates for Low-Rank Loss
Matrices with Applications to Subset Ranking Losses
Harish G. Ramaswamy
Computer Science & Automation
Indian Institute of Science
harish [email protected]
Shivani Agarwal
Computer Science & Automation
Indian Institute of Science
[email protected]
Ambuj Tewari
Statistics and EECS
University of Michigan
[email protected]
Abstract
The design of convex, calibrated surrogate losses, whose minimization entails
consistency with respect to a desired target loss, is an important concept to have
emerged in the theory of machine learning in recent years. We give an explicit
construction of a convex least-squares type surrogate loss that can be designed to
be calibrated for any multiclass learning problem for which the target loss matrix
has a low-rank structure; the surrogate loss operates on a surrogate target space
of dimension at most the rank of the target loss. We use this result to design
convex calibrated surrogates for a variety of subset ranking problems, with target
losses including the precision@q, expected rank utility, mean average precision,
and pairwise disagreement.
1 Introduction
There has been much interest in recent years in understanding consistency properties of learning algorithms, particularly algorithms that minimize a surrogate loss, for a variety of finite-output learning problems, including binary classification, multiclass classification, multi-label classification, subset ranking, and others [1–17]. For algorithms minimizing a surrogate loss, the question of consistency reduces to the question of calibration of the surrogate loss with respect to the target loss of interest [5–7, 16]; in general, one is interested in convex surrogates that can be minimized efficiently. In particular, the existence (and lack thereof) of convex calibrated surrogates for various subset ranking problems, with target losses including for example the discounted cumulative gain (DCG), mean average precision (MAP), mean reciprocal rank (MRR), and pairwise disagreement (PD), has received significant attention recently [9, 11–13, 15–17].
In this paper, we develop a general result which allows us to give an explicit convex, calibrated surrogate defined on a low-dimensional surrogate space for any finite-output learning problem for which the loss matrix has low rank. Recently, Ramaswamy and Agarwal [16] showed the existence of such surrogates, but their result involved an unwieldy surrogate space, and moreover did not give an explicit, usable construction for the mapping needed to transform predictions in the surrogate space back to the original prediction space. Working in the same general setting as theirs, we give an explicit construction that leads to a simple least-squares type surrogate. We then apply this result to obtain several new results related to subset ranking. Specifically, we first obtain calibrated, score-based surrogates for the Precision@q loss, which includes the winner-take-all (WTA) loss as a special case, and the expected rank utility (ERU) loss; to the best of our knowledge, consistency with respect to these losses has not been studied previously in the literature. When there are r documents to be ranked for each query, the score-based surrogates operate on an r-dimensional surrogate space. We then turn to the MAP and PD losses, which are both widely used in practice, and for which it has been shown that no convex score-based surrogate can be calibrated for all probability distributions [11, 15, 16]. For the PD loss, Duchi et al. [11] gave certain low-noise conditions on the probability distribution under which a convex, calibrated score-based surrogate could be designed;
we are unaware of such a result for the MAP loss. A straightforward application of our low-rank result to these losses yields convex calibrated surrogates defined on O(r²)-dimensional surrogate spaces, but in both cases, the mapping needed to transform back to predictions in the original space involves solving a computationally hard problem. Inspired by these surrogates, we then give a convex score-based surrogate with an efficient mapping that is calibrated with respect to MAP under certain conditions on the probability distribution; this is the first such result for the MAP loss that we are aware of. We also give a family of convex score-based surrogates calibrated with the PD loss under certain noise conditions, generalizing the surrogate and conditions of Duchi et al. [11]. Finally, we give an efficient mapping for the O(r²)-dimensional surrogate for the PD loss, and show that this leads to a convex surrogate calibrated with the PD loss under a more general condition, i.e. over a larger set of probability distributions, than those associated with the score-based surrogates.
Paper outline. We start with some preliminaries and background in Section 2. Section 3 gives our primary result, namely an explicit convex surrogate calibrated for low-rank loss matrices, defined on a surrogate space of dimension at most the rank of the matrix. Sections 4–7 then give applications of this result to the Precision@q, ERU, MAP, and PD losses, respectively. All proofs not included in the main text can be found in the appendix.
2 Preliminaries and Background
Setup. We work in the same general setting as that of Ramaswamy and Agarwal [16]. There is an instance space X, a finite set of class labels Y = [n] = {1, …, n}, and a finite set of target labels (possible predictions) T = [k] = {1, …, k}. Given training examples (X₁, Y₁), …, (X_m, Y_m) drawn i.i.d. from a distribution D on X × Y, the goal is to learn a prediction model h : X → T. Often, T = Y, but this is not always the case (for example, in the subset ranking problems we consider, the labels in Y are typically relevance vectors or preference graphs over a set of r documents, while the target labels in T are permutations over the r documents). The performance of a prediction model h : X → T is measured via a loss function ℓ : Y × T → R₊ (where R₊ = [0, ∞)); here ℓ(y, t) denotes the loss incurred on predicting t ∈ T when the label is y ∈ Y. Specifically, the goal is to learn a model h with low expected loss or ℓ-error er^ℓ_D[h] = E_{(X,Y)∼D}[ℓ(Y, h(X))]; ideally, one wants the ℓ-error of the learned model to be close to the optimal ℓ-error er^{ℓ,*}_D = inf_{h:X→T} er^ℓ_D[h]. An algorithm which when given a random training sample as above produces a (random) model h_m : X → T is said to be consistent w.r.t. ℓ if the ℓ-error of the learned model h_m converges in probability to the optimal: er^ℓ_D[h_m] →_P er^{ℓ,*}_D.¹
Typically, minimizing the discrete ℓ-error directly is computationally difficult; therefore one uses instead a surrogate loss function ψ : Y × R^d → R̄₊ (where R̄₊ = [0, ∞]), defined on the continuous surrogate target space R^d for some d ∈ Z₊ instead of the discrete target space T, and learns a model f : X → R^d by minimizing (approximately, based on the training sample) the ψ-error er^ψ_D[f] = E_{(X,Y)∼D}[ψ(Y, f(X))]. Predictions on new instances x ∈ X are then made by applying the learned model f and mapping back to predictions in the target space T via some mapping pred : R^d → T, giving h(x) = pred(f(x)). Under suitable conditions, algorithms that approximately minimize the ψ-error based on a training sample are known to be consistent with respect to ψ, i.e. to converge in probability to the optimal ψ-error er^{ψ,*}_D = inf_{f:X→R^d} er^ψ_D[f]. A desirable property of ψ is that it be calibrated w.r.t. ℓ, in which case consistency w.r.t. ψ also guarantees consistency w.r.t. ℓ; we give a formal definition of calibration and statement of this result below.
In what follows, we will denote by Δ_n the probability simplex in Rⁿ: Δ_n = {p ∈ Rⁿ₊ : Σᵢ pᵢ = 1}. For z ∈ R, let (z)₊ = max(z, 0). We will find it convenient to view the loss function ℓ : Y × T → R₊ as an n × k matrix with elements ℓ_{yt} = ℓ(y, t) for y ∈ [n], t ∈ [k], and column vectors ℓ_t = (ℓ_{1t}, …, ℓ_{nt})ᵀ ∈ Rⁿ₊ for t ∈ [k]. We will also represent the surrogate loss ψ : Y × R^d → R̄₊ as a vector function ψ : R^d → R̄ⁿ₊ with ψ_y(u) = ψ(y, u) for y ∈ [n], u ∈ R^d, and ψ(u) = (ψ₁(u), …, ψ_n(u))ᵀ ∈ R̄ⁿ₊ for u ∈ R^d.
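As a small numeric illustration of this matrix view (our own, with arbitrary random values), the Bayes-optimal prediction under a label distribution p ∈ Δ_n is any t minimizing pᵀℓ_t, the quantity that appears in Definition 1 below.

    import numpy as np

    n, k = 4, 3
    rng = np.random.default_rng(0)
    ell = rng.random((n, k))        # ell[y, t] = loss of predicting t when the label is y
    p = rng.dirichlet(np.ones(n))   # a point of the simplex Delta_n

    expected_losses = p @ ell       # the k-vector of p' ell_t for t = 1..k
    t_star = int(np.argmin(expected_losses))  # a Bayes-optimal prediction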
Definition 1 (Calibration). Let ℓ : Y × T → R₊ and let P ⊆ Δ_n. A surrogate loss ψ : Y × R^d → R̄₊ is said to be calibrated w.r.t. ℓ over P if there exists a function pred : R^d → T such that
    ∀p ∈ P :  inf_{u ∈ R^d : pred(u) ∉ argmin_t pᵀℓ_t}  pᵀψ(u)  >  inf_{u ∈ R^d}  pᵀψ(u).
In this case we also say (ψ, pred) is (ℓ, P)-calibrated, or if P = Δ_n, simply ℓ-calibrated.
¹ Here →_P denotes convergence in probability: X_m →_P a if ∀ε > 0, P(|X_m − a| ≥ ε) → 0 as m → ∞.
Theorem 2 ([6, 7, 16]). Let ℓ : Y × T → R₊ and ψ : Y × R^d → R̄₊. Then ψ is calibrated w.r.t. ℓ over Δ_n iff there exists a function pred : R^d → T such that for all distributions D on X × Y and all sequences of random (vector) functions f_m : X → R^d (depending on (X₁, Y₁), …, (X_m, Y_m)),
    er^ψ_D[f_m] →_P er^{ψ,*}_D   implies   er^ℓ_D[pred ∘ f_m] →_P er^{ℓ,*}_D.
For any instance x ∈ X, let p(x) ∈ Δ_n denote the conditional label probability vector at x, given by p(x) = (p₁(x), …, p_n(x))ᵀ where p_y(x) = P(Y = y | X = x). Then one can extend the above result to show that for P ⊆ Δ_n, ψ is calibrated w.r.t. ℓ over P iff there exists a function pred : R^d → T such that the above implication holds for all distributions D on X × Y for which p(x) ∈ P for all x ∈ X.
Subset ranking. Subset ranking problems arise frequently in information retrieval applications.
In a subset ranking problem, each instance in X consists of a query together with a set of say
r documents to be ranked. The label space Y varies from problem to problem: in some cases,
labels consist of binary or multi-level relevance judgements for the r documents, in which case
Y = {0, 1}^r or Y = {0, 1, . . . , s}^r for some appropriate s ∈ Z₊; in other cases, labels consist
of pairwise preference graphs over the r documents, represented as (possibly weighted) directed
acyclic graphs (DAGs) over r nodes. Given examples of such instance-label pairs, the goal is to
learn a model to rank documents for new queries/instances; in most cases, the desired ranking takes
the form of a permutation over the r documents, so that T = Sr (where Sr denotes the group of
permutations on r objects). As noted earlier, various loss functions are used in practice, and there
has been much interest in understanding questions of consistency and calibration for these losses in
recent years [9–15, 17]. The focus so far has mostly been on designing r-dimensional surrogates, which operate on a surrogate target space of dimension d = r; these are also termed "score-based"
surrogates since the resulting algorithms can be viewed as learning one real-valued score function
for each of the r documents, and in this case the pred mapping usually consists of simply sorting
the documents according to these scores. Below we will apply our result on calibrated surrogates
for low-rank loss matrices to obtain new calibrated surrogates (both r-dimensional, score-based surrogates and, in some cases, higher-dimensional surrogates) for several subset ranking losses.
3 Calibrated Surrogates for Low Rank Loss Matrices
The following is the primary result of our paper. The result gives an explicit construction for a convex, calibrated, least-squares type surrogate loss defined on a low-dimensional surrogate space for any target loss matrix that has a low-rank structure.
Theorem 3. Let ℓ : Y × T → R₊ be a loss function such that there exist d ∈ Z₊, vectors α₁, …, α_n ∈ R^d, β₁, …, β_k ∈ R^d and c ∈ R such that
    ℓ(y, t) = Σ_{i=1}^d α_{yi}·β_{ti} + c.
Let ψ̃ : Y × R^d → R̄₊ be defined as
    ψ̃(y, u) = Σ_{i=1}^d (u_i − α_{yi})²,
and let pred̃ : R^d → T be defined as
    pred̃(u) ∈ argmin_{t∈[k]} uᵀβ_t.
Then (ψ̃, pred̃) is ℓ-calibrated.
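Before the proof, here is a minimal sketch (ours) of the construction: alpha is the n × d array of the α_y's, beta the k × d array of the β_t's, and the constant c plays no role in either the surrogate or the pred mapping.

    import numpy as np

    def surrogate(y, u, alpha):
        # psi~(y, u): a plain least-squares fit of u to alpha_y.
        return np.sum((u - alpha[y]) ** 2)

    def pred(u, beta):
        # pred~(u): any t minimizing u' beta_t.
        return int(np.argmin(beta @ u))

    def u_p(p, alpha):
        # The proof below hinges on u^p = sum_y p_y alpha_y being the unique
        # minimizer of p' psi~(u); for that u, p' ell_t = (u^p)' beta_t + c.
        return p @ alpha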
Proof. Let p ∈ Δ_n. Define uᵖ ∈ R^d as uᵖᵢ = Σ_{y=1}^n p_y·α_{yi} for all i ∈ [d]. Now for any u ∈ R^d, we have
    pᵀψ̃(u) = Σ_{i=1}^d Σ_{y=1}^n p_y (u_i − α_{yi})².
Minimizing this over u ∈ R^d yields that uᵖ is the unique minimizer of pᵀψ̃(u). Also, for any t ∈ [k], we have
    pᵀℓ_t = Σ_{y=1}^n p_y ( Σ_{i=1}^d α_{yi}β_{ti} + c ) = (uᵖ)ᵀβ_t + c.
Now, for each t ∈ [k], define
    regret^ℓ_p(t) = pᵀℓ_t − min_{t′∈[k]} pᵀℓ_{t′} = (uᵖ)ᵀβ_t − min_{t′∈[k]} (uᵖ)ᵀβ_{t′}.
Clearly, by definition of pred̃, we have regret^ℓ_p(pred̃(uᵖ)) = 0. Also, if regret^ℓ_p(t) = 0 for all t ∈ [k], then trivially pred̃(u) ∈ argmin_t pᵀℓ_t for all u ∈ R^d (and there is nothing to prove in this case). Therefore assume there exists t ∈ [k] with regret^ℓ_p(t) > 0, and let
    ε = min_{t∈[k] : regret^ℓ_p(t)>0} regret^ℓ_p(t).
Then we have
    inf_{u∈R^d : pred̃(u)∉argmin_t pᵀℓ_t} pᵀψ̃(u)
      = inf_{u∈R^d : regret^ℓ_p(pred̃(u)) ≥ ε} pᵀψ̃(u)
      = inf_{u∈R^d : regret^ℓ_p(pred̃(u)) ≥ regret^ℓ_p(pred̃(uᵖ)) + ε} pᵀψ̃(u).
Now, we claim that the mapping u ↦ regret^ℓ_p(pred̃(u)) is continuous at u = uᵖ. To see this, suppose the sequence {u_m} converges to uᵖ. Then we have
    regret^ℓ_p(pred̃(u_m)) = (uᵖ)ᵀβ_{pred̃(u_m)} − min_{t′∈[k]} (uᵖ)ᵀβ_{t′}
      = (uᵖ − u_m)ᵀβ_{pred̃(u_m)} + u_mᵀβ_{pred̃(u_m)} − min_{t′∈[k]} (uᵖ)ᵀβ_{t′}
      = (uᵖ − u_m)ᵀβ_{pred̃(u_m)} + min_{t′∈[k]} u_mᵀβ_{t′} − min_{t′∈[k]} (uᵖ)ᵀβ_{t′}.
The last equality holds by definition of pred̃. It is easy to see that the term on the right goes to zero as u_m converges to uᵖ. Thus regret^ℓ_p(pred̃(u_m)) converges to regret^ℓ_p(pred̃(uᵖ)) = 0, yielding continuity at uᵖ. In particular, this implies there exists δ > 0 such that
    ‖u − uᵖ‖ < δ  ⟹  regret^ℓ_p(pred̃(u)) − regret^ℓ_p(pred̃(uᵖ)) < ε.
This gives
    inf_{u∈R^d : regret^ℓ_p(pred̃(u)) ≥ regret^ℓ_p(pred̃(uᵖ)) + ε} pᵀψ̃(u) ≥ inf_{u∈R^d : ‖u−uᵖ‖ ≥ δ} pᵀψ̃(u) > inf_{u∈R^d} pᵀψ̃(u),
where the last inequality holds since pᵀψ̃(u) is a strictly convex function of u and uᵖ is its unique minimizer. The above sequence of inequalities gives us that
    inf_{u∈R^d : pred̃(u)∉argmin_t pᵀℓ_t} pᵀψ̃(u) > inf_{u∈R^d} pᵀψ̃(u).
Since this holds for all p ∈ Δ_n, we have that (ψ̃, pred̃) is ℓ-calibrated.
We note that Ramaswamy and Agarwal [16] showed a similar least-squares type surrogate calibrated
for any loss ℓ : Y × T → R₊; indeed our proof technique above draws inspiration from the proof
technique there. However, the surrogate they gave was defined on a surrogate space of dimension
n − 1, where n is the number of class labels in Y. For many practical problems, this is an intractably
large number. For example, as noted above, in the subset ranking problems we consider, the number
of class labels is typically exponential in r, the number of documents associated with each query.
On the other hand, as we will see below, many subset ranking losses have a low-rank structure,
with rank linear or quadratic in r, allowing us to use the above result to design convex calibrated
surrogates on an O(r) or O(r²)-dimensional space. Ramaswamy and Agarwal also gave another
result in which they showed that any loss matrix of rank d has a d-dimensional convex calibrated
surrogate; however the surrogate there was defined such that it took values < ∞ on an awkward
space in R^d (not the full space R^d) that would be difficult to construct in practice, and moreover,
their result did not yield an explicit construction for the pred mapping required to use a calibrated
surrogate in practice. Our result above combines the benefits of both these previous results, allowing
explicit construction of low-dimensional least-squares type surrogates for any low-rank loss matrix.
The following sections will illustrate several applications of this result.
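To make the construction of Theorem 3 concrete, here is a minimal sketch (ours, not from the paper; Python with NumPy assumed) of the surrogate and the pred mapping when the low-rank decomposition is given explicitly as matrices alpha (n × d, rows α_y) and beta (k × d, rows β_t):

```python
import numpy as np

def surrogate_loss(alpha, y, u):
    """Least-squares surrogate of Theorem 3: psi*(y, u) = sum_i (u_i - alpha_{y i})^2."""
    return float(np.sum((np.asarray(u) - alpha[y]) ** 2))

def pred(beta, u):
    """pred*(u): any element of argmin_{t in [k]} u . beta_t (here the first one)."""
    return int(np.argmin(beta @ np.asarray(u)))
```

Training then amounts to regressing the d-dimensional targets α_y; a naive prediction is a pass over the k prototype vectors β_t, which the subset ranking examples below replace with problem-specific shortcuts since there k = r!.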
4   Calibrated Surrogates for Precision@q
The Precision@q is a popular performance measure for subset ranking problems in information
retrieval. As noted above, in a subset ranking problem, each instance in X consists of a query
together with a set of r documents to be ranked. Consider a setting with binary relevance judgement
labels, so that Y = {0,1}^r with n = 2^r. The prediction space is T = S_r (the group of permutations on
r objects) with k = r!. For y ∈ {0,1}^r and σ ∈ S_r, where σ(i) denotes the position of document i
under σ, the Precision@q loss for any integer q ∈ [r] can be written as follows:
\[ \ell_{P@q}(y, \sigma) = 1 - \frac{1}{q}\sum_{i=1}^{q} y_{\sigma^{-1}(i)} = 1 - \frac{1}{q}\sum_{i=1}^{r} y_i\, \mathbf{1}\big(\sigma(i) \le q\big)\,. \]
Therefore, by Theorem 3, for the r-dimensional surrogate ψ*_P@q : {0,1}^r × R^r → R̄₊ and
pred*_P@q : R^r → S_r defined as

\[ \psi^*_{P@q}(y, u) = \sum_{i=1}^{r} (u_i - y_i)^2 \]
\[ \mathrm{pred}^*_{P@q}(u) \in \operatorname*{argmax}_{\sigma\in S_r} \sum_{i=1}^{r} u_i\,\mathbf{1}\big(\sigma(i) \le q\big)\,, \]

we have that (ψ*_P@q, pred*_P@q) is ℓ_P@q-calibrated. It can easily be seen that for any u ∈ R^r, any
permutation σ which places the top q documents sorted in decreasing order of scores u_i in the top
q positions achieves the maximum in pred*_P@q(u); thus pred*_P@q(u) can be implemented efficiently
using a standard sorting or selection algorithm. Note that the popular winner-take-all (WTA) loss,
which assigns a loss of 0 if the top-ranked item is relevant (i.e. if y_{σ⁻¹(1)} = 1) and 1 otherwise,
is simply a special case of the above loss with q = 1; therefore the above construction also yields
a calibrated surrogate for the WTA loss. To our knowledge, this is the first example of convex,
calibrated surrogates for the Precision@q and WTA losses.
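As an illustration, a small sketch (ours) of pred*_P@q: sorting all scores in decreasing order yields one maximizer, since only the identity of the top q documents matters; a partial selection of the top q scores would also suffice and is asymptotically cheaper.

```python
import numpy as np

def pred_precision_at_q(u):
    """Return a permutation sigma with sigma[i] = position of document i (1-indexed),
    obtained by sorting scores in decreasing order; a maximizer of pred*_{P@q}."""
    u = np.asarray(u, dtype=float)
    order = np.argsort(-u)                   # document indices by decreasing score
    sigma = np.empty(len(u), dtype=int)
    sigma[order] = np.arange(1, len(u) + 1)
    return sigma
```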
5   Calibrated Surrogates for Expected Rank Utility
The expected rank utility (ERU) is a popular subset ranking performance measure used in recommender systems displaying short ranked lists [18]. In this case the labels consist of multi-level
relevance judgements (such as 0 to 5 stars), so that Y = {0, 1, ..., s}^r for some appropriate s ∈ Z₊
with n = (s + 1)^r. The prediction space again is T = S_r with k = r!. For y ∈ {0, 1, ..., s}^r and
σ ∈ S_r, where σ(i) denotes the position of document i under σ, the ERU loss is defined as

\[ \ell_{\mathrm{ERU}}(y, \sigma) = z - \sum_{i=1}^{r} \max(y_i - v, 0)\, 2^{\frac{1-\sigma(i)}{w-1}}, \]

where z is a constant to ensure the positivity of the loss, v ∈ [s] is a constant that indicates a
neutral score, and w ∈ R is a constant indicating the viewing half-life. Thus, by Theorem 3, for the
r-dimensional surrogate ψ*_ERU : {0, 1, ..., s}^r × R^r → R̄₊ and pred*_ERU : R^r → S_r defined as

\[ \psi^*_{\mathrm{ERU}}(y, u) = \sum_{i=1}^{r} \big(u_i - \max(y_i - v, 0)\big)^2 \]
\[ \mathrm{pred}^*_{\mathrm{ERU}}(u) \in \operatorname*{argmax}_{\sigma\in S_r} \sum_{i=1}^{r} u_i\, 2^{\frac{1-\sigma(i)}{w-1}}, \]

we have that (ψ*_ERU, pred*_ERU) is ℓ_ERU-calibrated. It can easily be seen that for any u ∈ R^r, any
permutation σ satisfying the condition

\[ u_i > u_j \implies \sigma(i) < \sigma(j) \]

achieves the maximum in pred*_ERU(u), and therefore pred*_ERU(u) can be implemented efficiently
by simply sorting the r documents in decreasing order of scores u_i. As for Precision@q, to our
knowledge, this is the first example of a convex, calibrated surrogate for the ERU loss.
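For concreteness, a sketch (ours) of the regression targets used by the ERU surrogate; v is the problem-specific neutral-score constant, and the pred mapping is the same decreasing-score sort shown for Precision@q above.

```python
import numpy as np

def eru_surrogate_targets(y, v):
    """Regression targets for psi*_ERU: max(y_i - v, 0) for each document i."""
    return np.maximum(np.asarray(y, dtype=float) - v, 0.0)
```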
6   Calibrated Surrogates for Mean Average Precision
The mean average precision (MAP) is a widely used ranking performance measure in information
retrieval and related applications [15, 19]. As with the Precision@q loss, Y = {0,1}^r and T = S_r.
For y ∈ {0,1}^r and σ ∈ S_r, where σ(i) denotes the position of document i under σ, the MAP loss
is defined as follows:

\[ \ell_{\mathrm{MAP}}(y, \sigma) = 1 - \frac{1}{|\{\gamma : y_\gamma = 1\}|} \sum_{i:\, y_i = 1} \frac{1}{\sigma(i)} \sum_{j=1}^{\sigma(i)} y_{\sigma^{-1}(j)}\,. \]
It was recently shown that there cannot exist any r-dimensional convex, calibrated surrogates for
the MAP loss [15]. We now re-write the MAP loss above in a manner that allows us to show the
existence of an O(r²)-dimensional convex, calibrated surrogate. In particular, we can write

\[ \ell_{\mathrm{MAP}}(y, \sigma) = 1 - \frac{1}{\sum_{\gamma=1}^{r} y_\gamma} \sum_{i=1}^{r}\sum_{j=1}^{i} \frac{y_{\sigma^{-1}(i)}\, y_{\sigma^{-1}(j)}}{i} = 1 - \frac{1}{\sum_{\gamma=1}^{r} y_\gamma} \sum_{i=1}^{r}\sum_{j=1}^{i} \frac{y_i\, y_j}{\max(\sigma(i), \sigma(j))}\,. \]
Thus, by Theorem 3, for the r(r+1)/2-dimensional surrogate ψ*_MAP : {0,1}^r × R^{r(r+1)/2} → R̄₊
and pred*_MAP : R^{r(r+1)/2} → S_r defined as

\[ \psi^*_{\mathrm{MAP}}(y, u) = \sum_{i=1}^{r}\sum_{j=1}^{i} \Big( u_{ij} - \frac{y_i\, y_j}{\sum_{\gamma=1}^{r} y_\gamma} \Big)^2 \]
\[ \mathrm{pred}^*_{\mathrm{MAP}}(u) \in \operatorname*{argmax}_{\sigma\in S_r} \sum_{i=1}^{r}\sum_{j=1}^{i} u_{ij}\, \frac{1}{\max(\sigma(i), \sigma(j))}\,, \]

we have that (ψ*_MAP, pred*_MAP) is ℓ_MAP-calibrated.
Note however that the optimization problem associated with computing pred*_MAP(u) above can be
written as a quadratic assignment problem (QAP), and most QAPs are known to be NP-hard. We
conjecture that the QAP associated with the mapping pred*_MAP above is also NP-hard. Therefore,
while the surrogate loss ψ*_MAP is calibrated for ℓ_MAP and can be minimized efficiently over a training
sample to learn a model f : X → R^{r(r+1)/2}, for large r, evaluating the mapping required to transform
predictions in R^{r(r+1)/2} back to predictions in S_r is likely to be computationally infeasible. Below
we describe an alternate mapping in place of pred*_MAP which can be computed efficiently, and show
that under certain conditions on the probability distribution, the surrogate ψ*_MAP together with this
mapping is still calibrated for ℓ_MAP.
Specifically, define pred_MAP : R^{r(r+1)/2} → S_r as follows:

\[ \mathrm{pred}_{\mathrm{MAP}}(u) \in \big\{\sigma \in S_r : u_{ii} > u_{jj} \implies \sigma(i) < \sigma(j)\big\}. \]

Clearly, pred_MAP(u) can be implemented efficiently by simply sorting the 'diagonal' elements u_ii
for i ∈ [r]. Also, let Δ_Y denote the probability simplex over Y, and for each p ∈ Δ_Y, define
u^p ∈ R^{r(r+1)/2} as follows:

\[ u^p_{ij} = \sum_{y\in\mathcal{Y}} p_y\, \frac{y_i\, y_j}{\sum_{\gamma=1}^{r} y_\gamma} = \mathbf{E}_{Y\sim p}\Big[\frac{Y_i\, Y_j}{\sum_{\gamma=1}^{r} Y_\gamma}\Big] \qquad \forall i, j \in [r] : i \ge j\,. \]

Now define P_reinforce ⊆ Δ_Y as follows:

\[ \mathcal{P}_{\mathrm{reinforce}} = \Big\{ p \in \Delta_{\mathcal{Y}} : u^p_{ii} \ge u^p_{jj} \implies u^p_{ii} \ge u^p_{jj} + \sum_{\gamma\in[r]\setminus\{i,j\}} \big(u^p_{j\gamma} - u^p_{i\gamma}\big)_+ \Big\}, \]

where we set u^p_{ij} = u^p_{ji} for i < j. Then we have the following result:

Theorem 4. (ψ*_MAP, pred_MAP) is (ℓ_MAP, P_reinforce)-calibrated.
The ideal predictor pred*_MAP uses the entire u matrix, but the predictor pred_MAP uses only the diagonal elements. The noise conditions P_reinforce can be viewed as basically enforcing that the diagonal
elements dominate and enforce a clear ordering themselves.
In fact, since the mapping pred_MAP depends on only the diagonal elements of u, we can equivalently
define an r-dimensional surrogate that is calibrated w.r.t. ℓ_MAP over P_reinforce. Specifically, we have
the following immediate corollary:
Corollary 5. Let ψ̃_MAP : {0,1}^r × R^r → R̄₊ and pred̃_MAP : R^r → S_r be defined as

\[ \tilde{\psi}_{\mathrm{MAP}}(y, \tilde{u}) = \sum_{i=1}^{r} \Big( \tilde{u}_i - \frac{y_i}{\sum_{\gamma=1}^{r} y_\gamma} \Big)^2 \]
\[ \widetilde{\mathrm{pred}}_{\mathrm{MAP}}(\tilde{u}) \in \big\{ \sigma \in S_r : \tilde{u}_i > \tilde{u}_j \implies \sigma(i) < \sigma(j) \big\}. \]

Then (ψ̃_MAP, pred̃_MAP) is (ℓ_MAP, P_reinforce)-calibrated.
Looking at the form of ψ̃_MAP and pred̃_MAP, we can see that the function s : Y → R^r defined as
s_i(y) = y_i / (Σ_{γ=1}^r y_γ) is a 'standardization function' for the MAP loss over P_reinforce, and therefore
it follows that any 'order-preserving surrogate' with this standardization function is also calibrated
with the MAP loss over P_reinforce [13]. To our knowledge, this is the first example of conditions on
the probability distribution under which a convex calibrated (and moreover, score-based) surrogate
can be designed for the MAP loss.
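A minimal sketch (ours) of the resulting r-dimensional recipe: regress onto the standardization targets s_i(y) = y_i / Σ_γ y_γ; at prediction time the learned scores are simply sorted in decreasing order, exactly as in the sort-based pred mappings above.

```python
import numpy as np

def map_standardization(y):
    """Targets s_i(y) = y_i / sum(y); assumes at least one relevant document."""
    y = np.asarray(y, dtype=float)
    return y / y.sum()
```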
7   Calibrated Surrogates for Pairwise Disagreement

The pairwise disagreement (PD) loss is a natural and widely used loss in subset ranking [11, 17].
The label space Y here consists of a finite number of (possibly weighted) directed acyclic graphs
(DAGs) over r nodes; we can represent each such label as a vector y ∈ R₊^{r(r−1)} where at least one
of y_ij or y_ji is 0 for each i ≠ j, with y_ij > 0 indicating a preference for document i over document
j and y_ij denoting the weight of the preference. The prediction space as usual is T = S_r with
k = r!. For y ∈ Y and σ ∈ S_r, where σ(i) denotes the position of document i under σ, the PD loss
is defined as follows:

\[ \ell_{\mathrm{PD}}(y, \sigma) = \sum_{i=1}^{r}\sum_{j\ne i} y_{ij}\, \mathbf{1}\big(\sigma(i) > \sigma(j)\big). \]
It was recently shown that there cannot exist any r-dimensional convex, calibrated surrogates for the
PD loss [15, 16]. By Theorem 3, for the r(r−1)-dimensional surrogate ψ*_PD : Y × R^{r(r−1)} → R̄₊
and pred*_PD : R^{r(r−1)} → S_r defined as

\[ \psi^*_{\mathrm{PD}}(y, u) = \sum_{i=1}^{r}\sum_{j\ne i} (u_{ij} - y_{ij})^2 \qquad (1) \]
\[ \mathrm{pred}^*_{\mathrm{PD}}(u) \in \operatorname*{argmin}_{\sigma\in S_r} \sum_{i=1}^{r}\sum_{j\ne i} u_{ij}\, \mathbf{1}\big(\sigma(i) > \sigma(j)\big), \]

we immediately have that (ψ*_PD, pred*_PD) is ℓ_PD-calibrated (in fact the loss matrix ℓ_PD has rank at most
r(r−1)/2, allowing for an r(r−1)/2-dimensional surrogate; we use r(r−1) dimensions for convenience).
In this case, the optimization problem associated with computing pred*_PD(u) above is a minimum
weighted feedback arc set (MWFAS) problem, which is known to be NP-hard. Therefore, as with the
MAP loss, while the surrogate loss ψ*_PD is calibrated for ℓ_PD and can be minimized efficiently over
a training sample to learn a model f : X → R^{r(r−1)}, for large r, evaluating the mapping required to
transform predictions in R^{r(r−1)} back to predictions in S_r is likely to be computationally infeasible.
Below we give two sets of results. In Section 7.1, we give a family of score-based (r-dimensional)
surrogates that are calibrated with the PD loss under different conditions on the probability distribution; these surrogates and conditions generalize those of Duchi et al. [11]. In Section 7.2, we
give a different condition on the probability distribution under which we can actually avoid 'difficult' graphs being passed to pred*_PD. This condition is more general (i.e. encompasses a larger set
of probability distributions) than those associated with the score-based surrogates; this gives a new
(non-score-based, r(r−1)-dimensional) surrogate with an efficiently computable pred mapping that
is calibrated with the PD loss over a larger set of probability distributions than previous surrogates
for this loss.
7.1   Family of r-Dimensional Surrogates Calibrated with ℓ_PD Under Noise Conditions

The following gives a family of score-based surrogates, parameterized by functions f : Y → R^r, that
are calibrated with the PD loss under different conditions on the probability distribution:
Theorem 6. Let f : Y → R^r be any function that maps DAGs y ∈ Y to score vectors f(y) ∈ R^r. Let
ψ_f : Y × R^r → R̄₊, pred : R^r → S_r and P_f ⊆ Δ_Y be defined as

\[ \psi_f(y, u) = \sum_{i=1}^{r} \big(u_i - f_i(y)\big)^2 \]
\[ \mathrm{pred}(u) \in \big\{\sigma \in S_r : u_i > u_j \implies \sigma(i) < \sigma(j)\big\} \]
\[ \mathcal{P}_f = \big\{ p \in \Delta_{\mathcal{Y}} : \mathbf{E}_{Y\sim p}[Y_{ij}] > \mathbf{E}_{Y\sim p}[Y_{ji}] \implies \mathbf{E}_{Y\sim p}[f_i(Y)] > \mathbf{E}_{Y\sim p}[f_j(Y)] \big\}. \]

Then (ψ_f, pred) is (ℓ_PD, P_f)-calibrated.
The noise conditions P_f state that the expected value of the function f must decide the 'right' ordering.
We note that the surrogate given by Duchi et al. [11] can be written in our notation as

\[ \psi_{\mathrm{DMJ}}(y, u) = \sum_{i=1}^{r}\sum_{j\ne i} y_{ij}(u_j - u_i) + \nu \sum_{i=1}^{r} \lambda(u_i)\,, \]

where λ is a strictly convex and 1-coercive function and ν > 0. Taking λ(z) = z² and ν = 1/2 gives
a special case of the family of score-based surrogates in Theorem 6 above obtained by taking f as

\[ f_i(y) = \sum_{j\ne i} (y_{ij} - y_{ji})\,. \]

Indeed, the set of noise conditions under which the surrogate ψ_DMJ is shown to be calibrated with
the PD loss in Duchi et al. [11] is exactly the set P_f above with this choice of f. We also note that f
can be viewed as a 'standardization function' [13] for the PD loss over P_f.
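For concreteness, a sketch (ours) of this standardization function, with the label DAG represented as an r × r weight matrix Y where Y[i, j] = y_ij (zero diagonal); the pred mapping of Theorem 6 then sorts documents by the learned scores, as in the sections above.

```python
import numpy as np

def f_dmj(Y):
    """f_i(y) = sum_{j != i} (y_ij - y_ji): out-weight minus in-weight of node i."""
    Y = np.asarray(Y, dtype=float)
    return Y.sum(axis=1) - Y.sum(axis=0)
```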
7.2   An O(r²)-dimensional Surrogate Calibrated with ℓ_PD Under More General Conditions

Consider now the r(r−1)-dimensional surrogate ψ*_PD : Y × R^{r(r−1)} → R̄₊ defined in Eq. (1). We noted
the corresponding mapping pred*_PD involved an NP-hard optimization problem. Here we give an
alternate mapping pred_PD : R^{r(r−1)} → S_r that can be computed efficiently, and show that under
certain conditions on the probability distribution, the surrogate ψ*_PD together with this mapping
pred_PD is calibrated for ℓ_PD. The mapping pred_PD is described by Algorithm 1 below:
Algorithm 1 pred_PD (Input: u ∈ R^{r(r−1)}; Output: Permutation σ ∈ S_r)
Construct a directed graph over [r] with edge (i, j) having weight (u_ij − u_ji)₊. If this graph is
acyclic, return any topological sorted order. If the graph has cycles, sort the edges in ascending
order by weight and delete them one by one (smallest weight first) until the graph becomes acyclic;
return any topological sorted order of the resulting acyclic graph.
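A minimal sketch (ours) of Algorithm 1; it assumes the networkx library for acyclicity tests and topological sorting, and represents u as an r × r matrix with U[i][j] = u_ij:

```python
import networkx as nx

def pred_pd(U):
    r = len(U)
    G = nx.DiGraph()
    G.add_nodes_from(range(r))
    weighted_edges = []
    for i in range(r):
        for j in range(r):
            w = U[i][j] - U[j][i]
            if i != j and w > 0:             # edge weight (u_ij - u_ji)_+
                G.add_edge(i, j)
                weighted_edges.append((w, i, j))
    # delete edges in ascending order of weight until the graph is acyclic
    for w, i, j in sorted(weighted_edges):
        if nx.is_directed_acyclic_graph(G):
            break
        G.remove_edge(i, j)
    sigma = [0] * r
    for pos, doc in enumerate(nx.topological_sort(G), start=1):
        sigma[doc] = pos
    return sigma                              # sigma[i] = position of document i
```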
For each p ∈ Δ_Y, define E^p = {(i, j) ∈ [r] × [r] : E_{Y∼p}[Y_ij] > E_{Y∼p}[Y_ji]}, and define

\[ \mathcal{P}_{\mathrm{DAG}} = \big\{ p \in \Delta_{\mathcal{Y}} : ([r], E^p) \text{ is a DAG} \big\}. \]

Then we have the following result:

Theorem 7. (ψ*_PD, pred_PD) is (ℓ_PD, P_DAG)-calibrated.
It is easy to see that P_DAG ⊇ P_f for all f (where P_f is as defined in Theorem 6), so that the above result
yields a low-dimensional, convex surrogate with an efficiently computable pred mapping that is
calibrated for the PD loss under a broader set of conditions than the previous surrogates.
8   Conclusion

Calibration of surrogate losses is an important property in designing consistent learning algorithms.
We have given an explicit method for constructing calibrated surrogates for any learning problem
with a low-rank loss structure, and have used this to obtain several new results for subset ranking,
including new calibrated surrogates for the Precision@q, ERU, MAP and PD losses.
Acknowledgments
The authors thank the anonymous reviewers, Aadirupa Saha and Shiv Ganesh for their comments. HGR acknowledges a Tata Consultancy Services (TCS) PhD fellowship and the Indo-US Virtual Institute for Mathematical and Statistical Sciences (VIMSS). SA thanks the Department of Science & Technology (DST) and
Indo-US Science & Technology Forum (IUSSTF) for their support. AT gratefully acknowledges the support of
NSF under grant IIS-1319810.
References
[1] Gábor Lugosi and Nicolas Vayatis. On the Bayes-risk consistency of regularized boosting methods. Annals of Statistics, 32(1):30–55, 2004.
[2] Wenxin Jiang. Process consistency for AdaBoost. Annals of Statistics, 32(1):13–29, 2004.
[3] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56–134, 2004.
[4] Ingo Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 51(1):128–142, 2005.
[5] Peter L. Bartlett, Michael Jordan, and Jon McAuliffe. Convexity, classification and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[6] Tong Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225–1251, 2004.
[7] Ambuj Tewari and Peter L. Bartlett. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8:1007–1025, 2007.
[8] Ingo Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26:225–287, 2007.
[9] David Cossock and Tong Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140–5154, 2008.
[10] Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. Listwise approach to learning to rank: Theory and algorithm. In International Conference on Machine Learning, 2008.
[11] John Duchi, Lester Mackey, and Michael Jordan. On the consistency of ranking algorithms. In International Conference on Machine Learning, 2010.
[12] Pradeep Ravikumar, Ambuj Tewari, and Eunho Yang. On NDCG consistency of listwise ranking methods. In International Conference on Artificial Intelligence and Statistics, 2011.
[13] David Buffoni, Clément Calauzènes, Patrick Gallinari, and Nicolas Usunier. Learning scoring functions with order-preserving losses and standardized supervision. In International Conference on Machine Learning, 2011.
[14] Wei Gao and Zhi-Hua Zhou. On the consistency of multi-label learning. In Conference on Learning Theory, 2011.
[15] Clément Calauzènes, Nicolas Usunier, and Patrick Gallinari. On the (non-)existence of convex, calibrated surrogate losses for ranking. In Advances in Neural Information Processing Systems 25, pages 197–205. 2012.
[16] Harish G. Ramaswamy and Shivani Agarwal. Classification calibration dimension for general multiclass losses. In Advances in Neural Information Processing Systems 25, pages 2087–2095. 2012.
[17] Yanyan Lan, Jiafeng Guo, Xueqi Cheng, and Tie-Yan Liu. Statistical consistency of ranking methods in a rank-differentiable probability space. In Advances in Neural Information Processing Systems 25, pages 1241–1249. 2012.
[18] Quoc V. Le and Alex Smola. Direct optimization of ranking measures, arXiv:0704.3359, 2007.
[19] Yisong Yue, Thomas Finley, Filip Radlinski, and Thorsten Joachims. A support vector method for optimizing average precision. In Proceedings of the 30th ACM SIGIR International Conference on Research and Development in Information Retrieval, 2007.
On the Relationship Between Binary Classification,
Bipartite Ranking, and Binary Class Probability
Estimation
Harikrishna Narasimhan Shivani Agarwal
Department of Computer Science and Automation
Indian Institute of Science, Bangalore 560012, India
{harikrishna,shivani}@csa.iisc.ernet.in
Abstract
We investigate the relationship between three fundamental problems in machine
learning: binary classification, bipartite ranking, and binary class probability estimation (CPE). It is known that a good binary CPE model can be used to obtain a
good binary classification model (by thresholding at 0.5), and also to obtain a good
bipartite ranking model (by using the CPE model directly as a ranking model); it
is also known that a binary classification model does not necessarily yield a CPE
model. However, not much is known about other directions. Formally, these relationships involve regret transfer bounds. In this paper, we introduce the notion of
weak regret transfer bounds, where the mapping needed to transform a model from
one problem to another depends on the underlying probability distribution (and in
practice, must be estimated from data). We then show that, in this weaker sense, a
good bipartite ranking model can be used to construct a good classification model
(by thresholding at a suitable point), and more surprisingly, also to construct a
good binary CPE model (by calibrating the scores of the ranking model).
1   Introduction
Learning problems with binary labels, where one is given training examples consisting of objects
with binary labels (such as emails labeled spam/non-spam or documents labeled relevant/irrelevant),
are widespread in machine learning. These include for example the three fundamental problems of
binary classification, where the goal is to learn a classification model which, when given a new
object, can predict its label; bipartite ranking, where the goal is to learn a ranking model that can
rank new objects such that those in one category are ranked higher than those in the other; and
binary class probability estimation (CPE), where the goal is to learn a CPE model which, when
given a new object, can estimate the probability of its belonging to each of the two classes. Of
these, binary classification is classical, although several fundamental questions related to binary
classification have been understood only relatively recently [1–4]; bipartite ranking is more recent
and has received much attention in recent years [5–8], and binary CPE, while a classical problem,
also continues to be actively investigated [9,10]. All three problems abound in applications, ranging
from email classification to document retrieval and computer vision to medical diagnosis.
It is well known that a good binary CPE model can be used to obtain a good binary classification
model (in a formal sense that we will detail below; specifically, in terms of regret transfer bounds)
[4, 11]; more recently, it was shown that a good binary CPE model can also be used to obtain a
good bipartite ranking model (again, in terms of regret transfer bounds, to be detailed below) [12].
It is also known that a binary classification model cannot necessarily be converted to a CPE model.¹
However, beyond this, not much is understood about the exact relationship between these problems.²

¹ Note that we start from a single classification model, which rules out the probing reduction of [13].
² There are some results suggesting equivalence between specific boosting-style classification and ranking algorithms [14, 15], but this does not say anything about relationships between the problems per se.
Figure 1: (a) Current state of knowledge; (b) State of knowledge after the results of this paper. Here
'S' denotes a 'strong' regret transfer relationship; 'W' denotes a 'weak' regret transfer relationship.
In this paper, we introduce the notion of weak regret transfer bounds, where the mapping needed to
transform a model from one problem to another depends on the underlying probability distribution
(and in practice, must be estimated from data). We then show such weak regret transfer bounds
(under mild technical conditions) from bipartite ranking to binary classification, and from bipartite
ranking to binary CPE. Specifically, we show that, given a good bipartite ranking model and access
to either the distribution or a sample from it, one can estimate a suitable threshold and convert the
ranking model into a good binary classification model; similarly, given a good bipartite ranking
model and access to the distribution or a sample, one can 'calibrate' the ranking model to construct
a good binary CPE model. Though weak, the regret bounds are non-trivial in the sense that the
sample size required for constructing a good classification or CPE model from an existing ranking
model is smaller than what might be required to learn such models from scratch.
The main idea in transforming a ranking model to a classifier is to find a threshold that minimizes the
expected classification error on the distribution, or the empirical classification error on the sample.
We derive these results for cost-sensitive classification with any cost parameter c. The main idea
in transforming a ranking model to a CPE model is to find a monotonically increasing function
from R to [0, 1] which, when applied to the ranking model, minimizes the expected CPE error on
the distribution, or the empirical CPE error on the sample; this is similar to the idea of isotonic
regression [16?19]. The proof here makes use of a recent result of [20] which relates the squared
error of a calibrated CPE model to classification errors over uniformly drawn costs, and a result on
the Rademacher complexity of a class of bounded monotonically increasing functions on R [21]. As
a by-product of our analysis, we also obtain a weak regret transfer bound from bipartite ranking to
problems involving the area under the cost curve [22] as a performance measure.
The relationships between the three problems, both those previously known and those established
in this paper, are summarized in Figure 1. As noted above, in a weak regret transfer relationship,
given a model for one type of problem, one needs access to a data sample in order to transform this
to a model for another problem. This is in contrast to the previous 'strong' relationships, where a
binary CPE model can simply be thresholded at 0.5 (or cost c) to yield a classification model, or can
simply be used directly as a ranking model. Nevertheless, even with the weak relationships, one still
gets that a statistically consistent algorithm for bipartite ranking can be converted into a statistically
consistent algorithm for binary classification or for binary CPE. Moreover, as we demonstrate in our
experiments, if one has access to a good ranking model and only a small additional sample, then
one is better off using this sample to transform the ranking model into a classification or CPE model
rather than using the limited sample to learn a classification or CPE model from scratch.
The paper is structured as follows. We start with some preliminaries and background in Section 2.
Sections 3 and 4 give our main results, namely weak regret transfer bounds from bipartite ranking
to binary classification, and from bipartite ranking to binary CPE, respectively. Section 5 gives
experimental results on both synthetic and real data. All proofs are included in the appendix.
2   Preliminaries and Background

Let X be an instance space and let D be a probability distribution on X × {±1}. For (x, y) ∼ D,
we denote η(x) = P(y = 1 | x) and p = P(y = 1). In the settings we are interested in, given a
training sample S = ((x₁, y₁), ..., (xₙ, yₙ)) ∈ (X × {±1})ⁿ with examples drawn iid from D,
the goal is to learn a binary classification model, a bipartite ranking model, or a binary CPE model.
In what follows, for u ∈ [−∞, ∞], we will denote by sign(u) the function equal to 1 if u > 0 and
−1 otherwise, and by $\overline{\mathrm{sign}}(u)$ the function equal to 1 if u ≥ 0 and −1 otherwise.
(Cost-Sensitive) Binary Classification. Here the goal is to learn a model h : X → {±1}. Typically,
one is interested in models h with small expected 0-1 classification error:

\[ \mathrm{er}^{0\text{-}1}_D[h] = \mathbf{E}_{(x,y)\sim D}\big[\mathbf{1}(h(x) \neq y)\big]\,, \]

where 1(·) is 1 if its argument is true and 0 otherwise; this is simply the probability that h misclassifies an instance drawn randomly from D. The optimal 0-1 error (Bayes error) is

\[ \mathrm{er}^{0\text{-}1,*}_D = \inf_{h:\, X\to\{\pm 1\}} \mathrm{er}^{0\text{-}1}_D[h] = \mathbf{E}_x\big[\min\big(\eta(x),\, 1-\eta(x)\big)\big]\,; \]

this is achieved by the Bayes classifier h*(x) = sign(η(x) − 1/2). The 0-1 classification regret of
a classifier h is then regret^{0-1}_D[h] = er^{0-1}_D[h] − er^{0-1,*}_D. More generally, in a cost-sensitive binary
classification problem with cost parameter c ∈ (0, 1), where the cost of a false positive is c and that
of a false negative is (1 − c), one is interested in models h with small cost-sensitive 0-1 error:

\[ \mathrm{er}^{0\text{-}1,c}_D[h] = \mathbf{E}_{(x,y)\sim D}\big[(1-c)\,\mathbf{1}(y = 1,\, h(x) = -1) + c\,\mathbf{1}(y = -1,\, h(x) = 1)\big]\,. \]

Note that for c = 1/2, we get er^{0-1,1/2}_D[h] = (1/2) er^{0-1}_D[h]. The optimal cost-sensitive 0-1 error for cost
parameter c can then be seen to be

\[ \mathrm{er}^{0\text{-}1,c,*}_D = \inf_{h:\, X\to\{\pm 1\}} \mathrm{er}^{0\text{-}1,c}_D[h] = \mathbf{E}_x\big[\min\big((1-c)\,\eta(x),\ c\,(1-\eta(x))\big)\big]\,; \]

this is achieved by the classifier h*_c(x) = sign(η(x) − c). The c-cost-sensitive regret of a classifier
h is then regret^{0-1,c}_D[h] = er^{0-1,c}_D[h] − er^{0-1,c,*}_D.
Bipartite Ranking. Here one wants to learn a ranking model f : X → R that assigns higher scores
to positive instances than to negative ones. Specifically, the goal is to learn a ranking function f
with small bipartite ranking error:

\[ \mathrm{er}^{\mathrm{rank}}_D[f] = \mathbf{E}\Big[\mathbf{1}\big((y-y')(f(x)-f(x')) < 0\big) + \tfrac{1}{2}\,\mathbf{1}\big(f(x)=f(x')\big) \,\Big|\, y \neq y'\Big]\,, \]

where (x, y), (x', y') are assumed to be drawn iid from D; this is the probability that a randomly
drawn pair of instances with different labels is mis-ranked by f, with ties broken uniformly at
random. It is known that the ranking error of f is equivalent to one minus the area under the ROC
curve (AUC) of f [5–7]. The optimal ranking error can be seen to be

\[ \mathrm{er}^{\mathrm{rank},*}_D = \inf_{f:\, X\to\mathbb{R}} \mathrm{er}^{\mathrm{rank}}_D[f] = \frac{1}{2p(1-p)}\,\mathbf{E}_{x,x'}\Big[\min\big(\eta(x)(1-\eta(x')),\ \eta(x')(1-\eta(x))\big)\Big]\,; \]

this is achieved by any function f* that is a strictly monotonically increasing transformation of η.
The ranking regret of a ranking function f is given by regret^rank_D[f] = er^rank_D[f] − er^{rank,*}_D.
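For concreteness, a small sketch (ours, not part of the paper) of the empirical version of this quantity on a finite sample, i.e. one minus the empirical AUC with ties counted as one half:

```python
import numpy as np

def empirical_ranking_error(scores, labels):
    """Fraction of positive-negative pairs mis-ranked by the scores, ties counting 1/2.
    Assumes the sample contains at least one example of each class."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels)
    pos, neg = s[y == 1], s[y == -1]
    diffs = pos[:, None] - neg[None, :]      # one entry per positive-negative pair
    return float(np.mean((diffs < 0) + 0.5 * (diffs == 0)))
```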
Binary Class Probability Estimation (CPE). The goal here is to learn a class probability estimator
or CPE model η̂ : X → [0, 1] with small squared error (relative to labels converted to {0, 1}):

\[ \mathrm{er}^{\mathrm{sq}}_D[\hat{\eta}] = \mathbf{E}_{(x,y)\sim D}\Big[\big(\hat{\eta}(x) - \tfrac{y+1}{2}\big)^2\Big]\,. \]

The optimal squared error can be seen to be

\[ \mathrm{er}^{\mathrm{sq},*}_D = \inf_{\hat{\eta}:\, X\to[0,1]} \mathrm{er}^{\mathrm{sq}}_D[\hat{\eta}] = \mathrm{er}^{\mathrm{sq}}_D[\eta] = \mathbf{E}_x\big[\eta(x)(1-\eta(x))\big]\,. \]

The squared-error regret of a CPE model η̂ can be seen to be

\[ \mathrm{regret}^{\mathrm{sq}}_D[\hat{\eta}] = \mathrm{er}^{\mathrm{sq}}_D[\hat{\eta}] - \mathrm{er}^{\mathrm{sq},*}_D = \mathbf{E}_x\big[\big(\hat{\eta}(x) - \eta(x)\big)^2\big]\,. \]
Regret Transfer Bounds. The following (strong) regret transfer results from binary CPE to binary
classification and from binary CPE to bipartite ranking are known:

Theorem 1 ([4, 11]). Let η̂ : X → [0, 1]. Let c ∈ (0, 1). Then the classifier h(x) = sign(η̂(x) − c)
obtained by thresholding η̂ at c satisfies

\[ \mathrm{regret}^{0\text{-}1,c}_D\big[\mathrm{sign}\circ(\hat{\eta}-c)\big] \le \mathbf{E}_x\big|\hat{\eta}(x)-\eta(x)\big| \le \sqrt{\mathrm{regret}^{\mathrm{sq}}_D[\hat{\eta}]}\,. \]

Theorem 2 ([12]). Let η̂ : X → [0, 1]. Then using η̂ as a ranking model yields

\[ \mathrm{regret}^{\mathrm{rank}}_D[\hat{\eta}] \le \frac{1}{p(1-p)}\,\mathbf{E}_x\big|\hat{\eta}(x)-\eta(x)\big| \le \frac{1}{p(1-p)}\sqrt{\mathrm{regret}^{\mathrm{sq}}_D[\hat{\eta}]}\,. \]
Note that as a consequence of these results, one gets that any learning algorithm that is statistically
consistent for binary CPE, i.e. whose squared-error regret converges in probability to zero as the
training sample size n → ∞, can easily be converted into an algorithm that is statistically consistent
for binary classification (with any cost parameter c, by thresholding the CPE models learned by the
algorithm at c), or into an algorithm that is statistically consistent for bipartite ranking (by using the
learned CPE models directly for ranking).
3   Regret Transfer Bounds from Bipartite Ranking to Binary Classification
In this section, we derive weak regret transfer bounds from bipartite ranking to binary classification.
We derive two bounds. The first holds in an idealized setting where one is given a ranking model f
as well as access to the distribution D for finding a suitable threshold to construct the classifier. The
second bound holds in a setting where one is given a ranking model f and a data sample S drawn
iid from D for finding a suitable threshold; this bound holds with high probability over the draw of
S. Our results will require the following assumption on the distribution D and ranking model f :
Assumption A. Let D be a probability distribution on X × {±1} with marginal distribution μ on
X. Let f : X → R be a ranking model, and let μ_f denote the induced distribution of scores f(x) ∈ R
when x ∼ μ. We say (D, f) satisfies Assumption A if μ_f is either discrete, continuous, or mixed
with at most finitely many point masses.

We will find it convenient to define the following set of all increasing functions from R to {±1}:

\[ \mathcal{T}_{\mathrm{inc}} = \big\{\theta : \mathbb{R}\to\{\pm 1\} \,:\, \theta(u) = \mathrm{sign}(u-t) \text{ or } \theta(u) = \overline{\mathrm{sign}}(u-t) \text{ for some } t \in [-\infty, \infty]\big\}. \]
Definition 3 (Optimal classification transform). For any ranking model f : X → R, cost parameter c ∈ (0, 1), and probability distribution D over X × {±1} such that (D, f) satisfies Assumption
A, define an optimal classification transform Thresh_{D,f,c} as any increasing function from R to {±1}
such that the classifier h(x) = Thresh_{D,f,c}(f(x)) resulting from composing f with Thresh_{D,f,c}
yields minimum cost-sensitive 0-1 error on D:

\[ \mathrm{Thresh}_{D,f,c} \in \operatorname*{argmin}_{\theta\in\mathcal{T}_{\mathrm{inc}}} \mathrm{er}^{0\text{-}1,c}_D[\theta\circ f]\,. \quad \text{(OP1)} \]

We note that when f is the class probability function η, we have Thresh_{D,η,c}(u) = sign(u − c).
Theorem 4 (Idealized weak regret transfer bound from bipartite ranking to binary classification based on distribution). Let (D, f) satisfy Assumption A. Let c ∈ (0, 1). Then the classifier
h(x) = Thresh_{D,f,c}(f(x)) satisfies

\[ \mathrm{regret}^{0\text{-}1,c}_D\big[\mathrm{Thresh}_{D,f,c}\circ f\big] \le \sqrt{2p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]}\,. \]
In practice, one does not have access to the distribution D, and the optimal threshold must be estimated from a data sample. To this end, we define the following:

Definition 5 (Optimal sample-based threshold). For any ranking model f : X → R, cost parameter c ∈ (0, 1), and sample S ∈ ∪_{n=1}^∞ (X × {±1})ⁿ, define an optimal sample-based threshold t̂_{S,f,c}
as any threshold on f such that the resulting classifier h(x) = sign(f(x) − t̂_{S,f,c}) yields minimum
cost-sensitive 0-1 error on S:

\[ \hat{t}_{S,f,c} \in \operatorname*{argmin}_{t\in\mathbb{R}} \mathrm{er}^{0\text{-}1,c}_S\big[\mathrm{sign}\circ(f - t)\big]\,, \quad \text{(OP2)} \]

where er^{0-1,c}_S[h] denotes the c-cost-sensitive 0-1 error of a classifier h on the empirical distribution
associated with S (i.e. the uniform distribution over examples in S).

Note that given a ranking function f, cost parameter c, and a sample S of size n, the optimal sample-based threshold t̂_{S,f,c} can be computed in O(n ln n) time by sorting the examples (x_i, y_i) in S based
on the scores f(x_i) and evaluating at most n + 1 distinct thresholds lying between adjacent score
values (and above/below all score values) in this sorted order.
Theorem 6 (Sample-based weak regret transfer bound from bipartite ranking to binary classification). Let D be any probability distribution on X × {±1} and f : X → R be any fixed
ranking model such that (D, f) satisfies Assumption A. Let S ∈ (X × {±1})ⁿ be drawn randomly
according to Dⁿ. Let c ∈ (0, 1). Let 0 < δ ≤ 1. Then with probability at least 1 − δ (over the draw
of S ∼ Dⁿ), the classifier h(x) = sign(f(x) − t̂_{S,f,c}) obtained by thresholding f at t̂_{S,f,c} satisfies

\[ \mathrm{regret}^{0\text{-}1,c}_D\big[\mathrm{sign}\circ(f - \hat{t}_{S,f,c})\big] \le \sqrt{2p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]} + \sqrt{\frac{32\big(2\ln(2n) + 1 + \ln\frac{4}{\delta}\big)}{n}}\,. \]
The proof of Theorem 6 involves an application of the result in Theorem 4 together with a standard
VC-dimension based uniform convergence result; specifically, the proof makes use of the fact that
selecting the sample-based threshold in (OP2) is equivalent to empirical risk minimization over Tinc .
Note in particular that the above regret transfer bound, though 'weak', is non-trivial in that it suggests
a good classifier can be constructed from a good ranking model using far fewer examples than might
be required for learning a classifier from scratch based on standard VC-dimension bounds.
Remark 7. We note that, as a consequence of Theorem 6, one can use any learning algorithm that
is statistically consistent for bipartite ranking to construct an algorithm that is consistent for (cost-sensitive) binary classification as follows: divide the training data into two (say equal) parts, use
one part for learning a ranking model using the consistent ranking algorithm, and the other part for
selecting a threshold on the learned ranking model; both terms in Theorem 6 will then go to zero as
the training sample size increases, yielding consistency for (cost-sensitive) binary classification.
Remark 8. Another implication of the above result is a justification for the use of the AUC as
a surrogate performance measure when learning in cost-sensitive classification settings where the
misclassification costs are unknown during training time [23]. Here, instead of learning a classifier
that minimizes the cost-sensitive classification error for a fixed cost parameter that may turn out to
be incorrect, one can learn a ranking function with good ranking performance (in terms of AUC),
and then later use a small additional sample to select a suitable threshold once the misclassification
costs are known; the above result provides guarantees on the resulting classification performance in
terms of the ranking (AUC) performance of the learned model.
4   Regret Transfer Bounds from Bipartite Ranking to Binary CPE
We now derive weak regret transfer bounds from bipartite ranking to binary CPE. Again, we derive
two bounds: the first holds in an idealized setting where one is given a ranking model f as well as
access to the distribution D for finding a suitable conversion to a CPE model; the second, which is
a high-probability bound, holds in a setting where one is given a ranking model f and a data sample
S drawn iid from D for finding a suitable conversion. We will need the following definition:
Definition 9 (Calibrated CPE model). A binary CPE model η̂ : X → [0, 1] is said to be calibrated
w.r.t. a probability distribution D on X × {±1} if

\[ \mathbf{P}(y = 1 \mid \hat{\eta}(x) = u) = u\,, \quad \forall u \in \mathrm{range}(\hat{\eta})\,, \]

where range(η̂) denotes the range of η̂.
We will make use of the following result, which follows from results in [20] and shows that the
squared error of a calibrated CPE model is related to the expected cost-sensitive error of a classifier
constructed using the optimal threshold in Definition 3, over uniform costs in (0, 1):

Theorem 10 ([20]). Let η̂ : X → [0, 1] be a binary CPE model that is calibrated w.r.t. D. Then

\[ \mathrm{er}^{\mathrm{sq}}_D[\hat{\eta}] = 2\,\mathbf{E}_{c\sim U(0,1)}\big[\mathrm{er}^{0\text{-}1,c}_D[\mathrm{Thresh}_{D,\hat{\eta},c}\circ\hat{\eta}]\big]\,, \]

where U(0, 1) is the uniform distribution over (0, 1) and Thresh_{D,η̂,c} is as defined in Definition 3.

The proof of Theorem 10 follows from the fact that for any CPE model η̂ that is calibrated w.r.t. D,
the optimal classification transform is given by Thresh_{D,η̂,c}(u) = sign(u − c), thus generalizing a
similar result noted earlier for the true class probability function η.
We then have the following result, which shows that for a calibrated CPE model η̂ : X → [0, 1], one
can upper bound the squared-error regret in terms of the bipartite ranking regret; this result follows
directly from Theorem 10 and Theorem 4:

Lemma 11 (Regret transfer bound for calibrated CPE models). Let η̂ : X → [0, 1] be a binary
CPE model that is calibrated w.r.t. D. Then

\[ \mathrm{regret}^{\mathrm{sq}}_D[\hat{\eta}] \le \sqrt{8p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[\hat{\eta}]}\,. \]
We are now ready to describe the construction of the optimal CPE transform in the idealized setting.
We will find it convenient to define the following set:

\[ \mathcal{G}_{\mathrm{inc}} = \big\{ g : \mathbb{R}\to[0,1] \,:\, g \text{ is a monotonically increasing function} \big\}. \]
Definition 12 (Optimal CPE transform). Let f : X → [a, b] (where a, b ∈ R, a < b) be any
bounded-range ranking model and D be any probability distribution over X × {±1} such that
(D, f) satisfies Assumption A. Moreover assume that μ_f (see Assumption A), if mixed, does not
have a point mass at the end-points a, b, and that the function η_f : [a, b] → [0, 1] defined as η_f(t) =
P(y = 1 | f(x) = t) is square-integrable w.r.t. the density of the continuous part of μ_f. Define
an optimal CPE transform Cal_{D,f} as any monotonically increasing function from R to [0, 1] such
that the CPE model η̂(x) = Cal_{D,f}(f(x)) resulting from composing f with Cal_{D,f} yields minimum
squared error on D (see appendix for existence of Cal_{D,f} under these conditions):

\[ \mathrm{Cal}_{D,f} \in \operatorname*{argmin}_{g\in\mathcal{G}_{\mathrm{inc}}} \mathrm{er}^{\mathrm{sq}}_D[g\circ f]\,. \quad \text{(OP3)} \]
Lemma 13 (Properties of Cal_{D,f}). Let (D, f) satisfy the conditions of Definition 12. Then
1. (Cal_{D,f} ∘ f) is calibrated w.r.t. D.
2. er^rank_D[Cal_{D,f} ∘ f] ≤ er^rank_D[f].

The proof of Lemma 13 is based on equivalent results for the minimizer of a sample version of
(OP3) [24, 25]. Combining this with Lemma 11 immediately gives the following result:
Theorem 14 (Idealized weak regret transfer bound from bipartite ranking to binary CPE
based on distribution). Let (D, f) satisfy the conditions of Definition 12. Then the CPE model
η̂(x) = Cal_{D,f}(f(x)) obtained by composing f with Cal_{D,f} satisfies

\[ \mathrm{regret}^{\mathrm{sq}}_D\big[\mathrm{Cal}_{D,f}\circ f\big] \le \sqrt{8p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]}\,. \]

We now derive a sample version of the above result.
Definition 15 (Optimal sample-based CPE transform). For any ranking model f : X → R
and sample S ∈ ∪_{n=1}^∞ (X × {±1})ⁿ, define an optimal sample-based transform Ĉal_{S,f} as any
monotonically increasing function from R to [0, 1] such that the CPE model η̂(x) = Ĉal_{S,f}(f(x))
resulting from composing f with Ĉal_{S,f} yields minimum squared error on S:

\[ \widehat{\mathrm{Cal}}_{S,f} \in \operatorname*{argmin}_{g\in\mathcal{G}_{\mathrm{inc}}} \mathrm{er}^{\mathrm{sq}}_S[g\circ f]\,, \quad \text{(OP4)} \]

where er^sq_S[η̂] denotes the squared error of a CPE model η̂ on the empirical distribution associated
with S (i.e. the uniform distribution over examples in S).

The above optimization problem corresponds to the well-known isotonic regression problem and
can be solved in O(n ln n) time using the pool adjacent violators (PAV) algorithm [16] (the PAV
algorithm outputs a score in [0, 1] for each instance in S such that these scores preserve the ordering
of f; a straightforward interpolation of the scores then yields a monotonically increasing function
of f). We then have the following sample-based weak regret transfer result:
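A minimal sketch (ours) of the PAV step, taking the labels converted to {0, 1} and already sorted by the scores f(x_i); the output is the vector of calibrated values (block means), monotonically non-decreasing along the score order.

```python
import numpy as np

def pav(labels_sorted_by_score):
    """Isotonic (least-squares) fit to 0/1 labels: pool adjacent violators."""
    blocks = []                                  # each block: [mean value, weight]
    for y in labels_sorted_by_score:
        blocks.append([float(y), 1])
        # merge while the previous block's value is not smaller
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for v, w in blocks:
        out.extend([v] * w)
    return np.array(out)
```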
Theorem 16 (Sample-based weak regret transfer bound from bipartite ranking to binary
CPE). Let D be any probability distribution on X × {±1} and f : X → [a, b] be any fixed
ranking model such that (D, f) satisfies the conditions of Definition 12. Let S ∈ (X × {±1})ⁿ
be drawn randomly according to Dⁿ. Let 0 < δ ≤ 1. Then with probability at least 1 − δ (over
the draw of S ∼ Dⁿ), the CPE model η̂(x) = Ĉal_{S,f}(f(x)) obtained by composing f with Ĉal_{S,f}
satisfies

\[ \mathrm{regret}^{\mathrm{sq}}_D\big[\widehat{\mathrm{Cal}}_{S,f}\circ f\big] \le \sqrt{8p(1-p)\,\mathrm{regret}^{\mathrm{rank}}_D[f]} + 96\sqrt{\frac{2\ln(n)}{n}} + 2\sqrt{\frac{2\ln\frac{8}{\delta}}{n}}\,. \]
The proof of Theorem 16 involves an application of the idealized result in Theorem 14, together with
a standard uniform convergence argument based on Rademacher averages applied to the function
class G_inc; for this, we make use of a result on the Rademacher complexity of this class [21].
Remark 17. As in the case of binary classification, we note that, as a consequence of Theorem 16,
one can use any learning algorithm that is statistically consistent for bipartite ranking to construct an
algorithm that is consistent for binary CPE as follows: divide the training data into two (say equal)
parts, use one part for learning a ranking model using the consistent ranking algorithm, and the other
part for selecting a CPE transform on the learned ranking model; both terms in Theorem 16 will then
go to zero as the training sample size increases, yielding consistency for binary CPE.
Remark 18. We note a recent result in [19] giving a bound on the empirical squared error of a CPE
model constructed from a ranking model using isotonic regression in terms of the empirical ranking
error of the ranking model. However, this does not amount to a regret transfer bound.
Remark 19. Finally, we note that the quantity $\mathbf{E}_{c\sim U(0,1)}\big[\mathrm{er}^{0\text{-}1,c}_D[\mathrm{Thresh}_{D,\hat{\eta},c}\circ\hat{\eta}]\big]$ that appears in
Theorem 10 is also the area under the cost curve [20, 22]; since this quantity is upper bounded in
terms of regret^rank_D[f] by virtue of Theorem 4, we also get a weak regret transfer bound from bipartite
ranking to problems where the area under the cost curve is a performance measure of interest. In
particular, this implies that algorithms that are statistically consistent with respect to AUC can also
be used to construct algorithms that are statistically consistent w.r.t. the area under the cost curve.
[Figure 2: two plots of regret vs. number of training examples (log scale, 10² to 10⁵). Panel (a): 0-1 regret of 'Ranking + Opt. Thres. Choice' together with its upper bound; panel (b): squared regret of 'Ranking + Calibration' together with its upper bound.]
Figure 2: Results on synthetic data. A ranking model was learned using a pairwise linear logistic regression ranking algorithm (which is a consistent ranking algorithm for the distribution used in these
experiments); this was followed by an optimal choice of classification threshold (with c = 1/2) or optimal CPE transform based on the distribution as outlined in Sections 3 and 4. The plots show (a)
0-1 classification regret of the resulting classification model together with the corresponding upper
bound from Theorem 4; and (b) squared-error regret of the resulting CPE model together with the
corresponding upper bound from Theorem 14. As can be seen, in both cases, the classification/CPE
regret converges to zero as the training sample size increases.
5   Experiments
We conducted two types of experiments to evaluate the results described in this paper: the first
involved synthetic data drawn from a known distribution for which the classification and ranking
regrets could be calculated exactly; the second involved real data from the UCI Machine Learning
Repository. In the first experiment, we learned ranking models using a consistent ranking algorithm
on increasing training sample sizes, converted the learned models using the optimal threshold or
CPE transforms described in Sections 3 and 4 based on the distribution, and verified that this yielded
classification and CPE models with 0-1 classification regret and squared-error regret converging to
zero. In the second experiment, we simulated a setting where a ranking model has been learned
from some data, the original training data is no longer available, and a classification/CPE model is
needed; we investigated whether in such a setting the ranking model could be used in conjunction
with a small additional data sample to produce a useful classification or CPE model.
5.1   Synthetic Data
Our first goal was to verify that using ranking models learned by a statistically consistent ranking
algorithm and applying the distribution-based transformations described in Sections 3 and 4 yields
classification/CPE models with classification/CPE regret converging to zero. For these experiments,
we generated examples in (X = R^d) × {±1} (with d = 100) as follows: each example was
assigned a positive/negative label with equal probability, with the positive instances drawn from a
multivariate Gaussian distribution with mean μ ∈ R^d and covariance matrix Σ ∈ R^{d×d}, and negative
instances drawn from a multivariate Gaussian distribution with mean −μ and the same covariance
matrix Σ; here μ was drawn uniformly at random from {−1, 1}^d, and Σ was drawn from a Wishart
distribution with 200 degrees of freedom and a randomly drawn invertible PSD scale matrix. For this
distribution, the optimal ranking and classification models are linear. Training samples of various
sizes n were generated from this distribution; in each case, a linear ranking model was learned using
a pairwise linear logistic regression algorithm (with regularization parameter set to 1/√n), and an
optimal threshold (with c = 1/2) or CPE transform was then applied to construct a binary classification
or CPE model. In this case the ranking regret and 0-1 classification regret of a linear model can
be computed exactly; the squared-error regret for the CPE model was computed approximately by
sampling instances from the distribution. The results are shown in Figure 2. As can be seen, the
classification and squared-error regrets of the classification and CPE models constructed both satisfy
the bounds from Theorems 4 and 14, and converge to zero as the bounds suggest.
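For reference, a sketch (ours) of this data-generating process; the Wishart draw is illustrated with an identity scale matrix for simplicity, whereas the experiments use a randomly drawn invertible PSD scale matrix.

```python
import numpy as np

d = 100
rng = np.random.default_rng(0)
mu = rng.choice([-1.0, 1.0], size=d)       # class-mean direction
A = rng.standard_normal((d, 200))
Sigma = (A @ A.T) / 200.0                  # ~ Wishart(200 d.o.f.), identity scale

def sample(n):
    y = rng.choice([-1, 1], size=n)        # equiprobable labels
    noise = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    X = noise + y[:, None] * mu            # positives ~ N(mu, Sigma), negatives ~ N(-mu, Sigma)
    return X, y
```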
5.2   Real Data
Our second goal was to investigate whether good classification and CPE models can be constructed
in practice by applying the data-based transformations described in Sections 3 and 4 to an existing
ranking model. For this purpose, we conducted experiments on several data sets drawn from the UCI
Machine Learning Repository (http://archive.ics.uci.edu/ml/). We present representative results on
two data sets, Spambase and Internet Ads, described below.
[Figure 3: four plots of test error vs. % of training examples (10 to 100): (a) Spambase (0-1), (b) Internet-ads (0-1), (c) Spambase (CPE), (d) Internet-ads (CPE). Panels (a) and (b) compare 'Empr. Thres. Choice' against logistic regression on 0-1 error; panels (c) and (d) compare 'Empr. Calibration' against logistic regression on squared error. Each panel also notes the fixed ranking error of the pre-learned ranking model (0.0179 and 0.0317 for the two data sets).]
Figure 3: Results on real data from the UCI repository. A ranking model was learned using a pairwise linear logistic regression ranking algorithm from a part of the data set that was then discarded.
The remaining data was divided into training and test sets. The training data was then used to estimate an empirical (sample-based) classification threshold and CPE transform (calibration) for this
ranking model as outlined in Sections 3 and 4. Using the same training data, a binary classifier and
CPE model were also learned from scratch using a standard linear logistic regression algorithm. The
plots show the resulting test error for both approaches. As can be seen, if only a small amount of
additional data is available, then using this data to convert an existing ranking model into a classification/CPE model is more beneficial than learning a classification/CPE model from scratch.
Spambase has 4601 instances and 57 features; Internet Ads has 3279 instances and 1554 features
(the original Internet Ads data set contains 1558 features, of which we discarded 4 with missing
entries). Here we divided each data set into three equal parts. One part was used to learn a ranking
model using a pairwise linear logistic regression algorithm, and was then discarded. This allowed
us to simulate a situation where
a (reasonably good) ranking model is available, but the original training data used to learn the model
is no longer accessible. Various subsets of the second part of the data (of increasing size) were then
used to estimate a data-based threshold or CPE transform on this ranking model using the optimal
sample-based methods described in Sections 3 and 4. The performance of the constructed classification and CPE models on the third part of the data, which was held out for testing purposes, is
shown in Figure 3. For comparison, we also show the performance of binary classification and CPE
models learned directly from the same subsets of the second part of the data using a standard linear
logistic regression algorithm. In each case, the regularization parameter for both standard logistic
regression and pairwise logistic regression was chosen from {10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1, 10, 10²}
using 5-fold cross validation on the corresponding training data. As can be seen, when one has
access to a previously learned (or otherwise available) ranking model with good ranking performance, and only a small amount of additional data, then one is better off using this data to estimate
a threshold/CPE transform and converting the ranking model into a classification/CPE model, than
learning a classification/CPE model from this data from scratch. However, as can also be seen, the
eventual performance of the classification/CPE model thus constructed is limited by the ranking performance of the original ranking model; therefore, once there is sufficient additional data available,
it is advisable to use this data to learn a new model from scratch.
6   Conclusion
We have investigated the relationship between three fundamental problems in machine learning:
binary classification, bipartite ranking, and binary class probability estimation (CPE). While formal
regret transfer bounds from binary CPE to binary classification and to bipartite ranking are known,
little has been known about other directions. We have introduced the notion of weak regret transfer
bounds that require access to a distribution or data sample, and have established the existence of
such bounds from bipartite ranking to binary classification and to binary CPE. The latter result
makes use of ideas related to calibration and isotonic regression; while these ideas have been used
to calibrate scores from real-valued classifiers to construct probability estimates in practice, to our
knowledge, this is the first use of such ideas in deriving formal regret bounds in relation to ranking.
Our experimental results demonstrate possible uses of the theory developed here.
Acknowledgments
Thanks to Karthik Sridharan for pointing us to a result on monotonically increasing functions.
Thanks to the anonymous reviewers for many helpful suggestions. HN gratefully acknowledges
support from a Google India PhD Fellowship. SA thanks the Department of Science & Technology
(DST), the Indo-US Science & Technology Forum (IUSSTF), and Yahoo! for their support.
^4 The original data set contains 1558 features; we discarded 4 features with missing entries.
4,318 | 4,908 | From Bandits to Experts:
A Tale of Domination and Independence
Noga Alon
Tel-Aviv University, Israel
[email protected]
Nicolò Cesa-Bianchi
Università degli Studi di Milano, Italy
[email protected]
Claudio Gentile
University of Insubria, Italy
[email protected]
Yishay Mansour
Tel-Aviv University, Israel
[email protected]
Abstract
We consider the partial observability model for multi-armed bandits, introduced
by Mannor and Shamir [14]. Our main result is a characterization of regret in
the directed observability model in terms of the dominating and independence
numbers of the observability graph (which must be accessible before selecting an
action). In the undirected case, we show that the learner can achieve optimal regret
without even accessing the observability graph before selecting an action. Both
results are shown using variants of the Exp3 algorithm operating on the observability graph in a time-efficient manner.
1 Introduction
Prediction with expert advice (see, e.g., [13, 16, 6, 10, 7]) is a general abstract framework for
studying sequential prediction problems, formulated as repeated games between a player and an
adversary. A well-studied example of a prediction game is the following: In each round, the adversary
privately assigns a loss value to each action in a fixed set. Then the player chooses an action (possibly
using randomization) and incurs the corresponding loss. The goal of the player is to control regret,
which is defined as the excess loss incurred by the player as compared to the best fixed action over
a sequence of rounds. Two important variants of this game have been studied in the past: the expert
setting, where at the end of each round the player observes the loss assigned to each action for that
round, and the bandit setting, where the player only observes the loss of the chosen action, but not
that of other actions.
Let $K$ be the number of available actions, and $T$ be the number of prediction rounds. The best
possible regret for the expert setting is of order $\sqrt{T \log K}$. This optimal rate is achieved by the
Hedge algorithm [10] or the Follow the Perturbed Leader algorithm [12]. In the bandit setting, the
optimal regret is of order $\sqrt{TK}$, achieved by the INF algorithm [2]. A bandit variant of Hedge,
called Exp3 [3], achieves a regret with a slightly worse bound of order $\sqrt{TK \log K}$.
Recently, Mannor and Shamir [14] introduced an elegant way of defining intermediate observability
models between the expert setting (full observability) and the bandit setting (single observability).
An intuitive way of representing an observability model is through a directed graph over actions:
an arc^1 from action i to action j implies that when playing action i we also get information about
the loss of action j. Thus, the expert setting is obtained by choosing a complete graph over actions
(playing any action reveals all losses), and the bandit setting is obtained by choosing an empty edge
set (playing an action only reveals the loss of that action).
^1 Following standard terminology in directed graph theory, throughout this paper a directed edge will
be called an arc.
The main result of [14] concerns undirected observability graphs. The regret is characterized in
terms of the independence number $\alpha$ of the undirected observability graph. Specifically, they prove
that $\sqrt{T\alpha \log K}$ is the optimal regret (up to logarithmic factors) and show that a variant of Exp3,
called ELP, achieves this bound when the graph is known ahead of time, where $\alpha \in \{1, \dots, K\}$
interpolates between full observability ($\alpha = 1$ for the clique) and single observability ($\alpha = K$ for
the graph with no edges). Given the observability graph, ELP runs a linear program to compute the
desired distribution over actions. In the case when the graph changes over time, and at each time
step ELP observes the current observability graph before prediction, a bound of
$\sqrt{\bigl(\sum_{t=1}^T \alpha_t\bigr) \log K}$
is shown, where $\alpha_t$ is the independence number of the graph at time $t$. A major problem left open
in [14] was the characterization of regret for directed observability graphs, a setting for which they
only proved partial results.
Our main result is a full characterization (to within logarithmic factors) of regret in the case of directed observability graphs. Our upper bounds are proven using a new algorithm, called Exp3-DOM.
This algorithm is efficient to run even when the graph changes over time: it just needs to compute
a small dominating set of the current observability graph (which must be given as side information) before prediction.^2 As in the undirected case, the regret for the directed case is characterized in
terms of the independence numbers of the observability graphs (computed ignoring edge directions).
We arrive at this result by showing that a key quantity emerging in the analysis of Exp3-DOM can
be bounded in terms of the independence numbers of the graphs. This bound (Lemma 13 in the
appendix) is based on a combinatorial construction which might be of independent interest.
We also explore the possibility of the learning algorithm receiving the observability graph only after
prediction, and not before. For this setting, we introduce a new variant of Exp3, called Exp3-SET,
which achieves the same regret as ELP for undirected graphs, but without the need of accessing the
current observability graph before each prediction. We show that Exp3-SET also performs well in
some random directed graph models. In general, we can upper bound the regret of Exp3-SET
as a function of the maximum acyclic subgraph of the observability graph, but this upper bound
may not be tight. Yet, Exp3-SET is much simpler and computationally less demanding than ELP,
which needs to solve a linear program in each round.
There are a variety of real-world settings where partial observability models corresponding to directed and undirected graphs are applicable. One of them is route selection. We are given a graph
of possible routes connecting cities: when we select a route r connecting two cities, we observe the
cost (say, driving time or fuel consumption) of the "edges" along that route and, in addition, we have
complete information on any sub-route r' of r, but not vice versa. We abstract this in our model by
having an observability graph over routes r, and an arc from r to any of its sub-routes r'.^3
Sequential prediction problems with partial observability models also arise in the context of recommendation systems. For example, an online retailer, which advertises products to users, knows that
users buying certain products are often interested in a set of related products. This knowledge can be
represented as a graph over the set of products, where two products are joined by an edge if and only
if users who buy any one of the two are likely to buy the other as well. In certain cases, however,
edges have a preferred orientation. For instance, a person buying a video game console might also
buy a high-def cable to connect it to the TV set. Vice versa, interest in high-def cables need not
indicate an interest in game consoles.
Such observability models may also arise in the case when a recommendation system operates in
a network of users. For example, consider the problem of recommending a sequence of products,
or contents, to users in a group. Suppose the recommendation system is hosted on an online social network, on which users can befriend each other. In this case, it has been observed that social
relationships reveal similarities in tastes and interests [15]. However, social links can also be asymmetric (e.g., followers of celebrities). In such cases, followers might be more likely to shape their
preferences after the person they follow, than the other way around. Hence, a product liked by a
celebrity is probably also liked by his/her followers, whereas a preference expressed by a follower
is more often specific to that person.
^2 Computing an approximately minimum dominating set can be done by running a standard greedy set cover
algorithm; see Section 2.
^3 Though this example may also be viewed as an instance of combinatorial bandits [8], the model studied
here is more general. For example, it does not assume linear losses, which could arise in the routing example
from the partial ordering of sub-routes.
2 Learning protocol, notation, and preliminaries
As stated in the introduction, we consider an adversarial multi-armed bandit setting with a finite
action set $V = \{1, \dots, K\}$. At each time $t = 1, 2, \dots$, a player (the "learning algorithm") picks
some action $I_t \in V$ and incurs a bounded loss $\ell_{I_t,t} \in [0,1]$. Unlike the standard adversarial bandit
problem [3, 7], where only the played action $I_t$ reveals its loss $\ell_{I_t,t}$, here we assume all the losses
in a subset $S_{I_t,t} \subseteq V$ of actions are revealed after $I_t$ is played. More formally, the player observes
the pairs $(i, \ell_{i,t})$ for each $i \in S_{I_t,t}$. We also assume $i \in S_{i,t}$ for any $i$ and $t$, that is, any action
reveals its own loss when played. Note that the bandit setting ($S_{i,t} = \{i\}$) and the expert setting
($S_{i,t} = V$) are both special cases of this framework. We call $S_{i,t}$ the observation set of action $i$ at
time $t$, and write $i \xrightarrow{t} j$ when at time $t$ playing action $i$ also reveals the loss of action $j$. Hence,
$S_{i,t} = \{j \in V : i \xrightarrow{t} j\}$. The family of observation sets $\{S_{i,t}\}_{i \in V}$ is collectively called the
observation system at time $t$.
The adversaries we consider are nonoblivious. Namely, each loss $\ell_{i,t}$ at time $t$ can be an arbitrary
function of the past player's actions $I_1, \dots, I_{t-1}$. The performance of a player $A$ is measured
through the regret
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr],$$
where $L_{A,T} = \ell_{I_1,1} + \dots + \ell_{I_T,T}$ and $L_{k,T} = \ell_{k,1} + \dots + \ell_{k,T}$ are the cumulative losses of the
player and of action $k$, respectively. The expectation is taken with respect to the player's internal
randomization (since losses are allowed to depend on the player's past random actions, $L_{k,t}$
may also be random). The observation system $\{S_{i,t}\}_{i \in V}$ is also adversarially generated, and each $S_{i,t}$
can be an arbitrary function of past player's actions, just like losses are. However, in Section 3 we
also consider a variant in which the observation system is randomly generated according to a specific
stochastic model.
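For concreteness, here is a tiny helper (ours, illustrative only) that computes the realized counterpart of this regret, i.e., the player's cumulative loss against the best fixed action in hindsight:

```python
import numpy as np

def regret(losses, plays):
    """losses: T x K array of adversarial losses; plays: length-T array of chosen actions.
    Returns the realized regret against the best fixed action in hindsight."""
    T, K = losses.shape
    L_player = losses[np.arange(T), plays].sum()   # cumulative loss of the player
    L_best = losses.sum(axis=0).min()              # cumulative loss of best fixed action
    return L_player - L_best
```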
Whereas some algorithms need to know the observation system at the beginning of each step $t$,
others need not. From this viewpoint, we consider two online learning settings. In the first setting,
called the informed setting, the full observation system $\{S_{i,t}\}_{i \in V}$ selected by the adversary is made
available to the learner before making its choice $I_t$. This is essentially the "side-information" framework first considered in [14]. In the second setting, called the uninformed setting, no information
whatsoever regarding the time-$t$ observation system is given to the learner prior to prediction. We
find it convenient to adopt the same graph-theoretic interpretation of observation systems as in [14].
At each step $t = 1, 2, \dots$, the observation system $\{S_{i,t}\}_{i \in V}$ defines a directed graph $G_t = (V, D_t)$,
where $V$ is the set of actions, and $D_t$ is the set of arcs, i.e., ordered pairs of nodes. For $j \neq i$, arc
$(i,j) \in D_t$ if and only if $i \xrightarrow{t} j$ (the self-loops created by $i \xrightarrow{t} i$ are intentionally ignored). Hence,
we can equivalently define $\{S_{i,t}\}_{i \in V}$ in terms of $G_t$. Observe that the outdegree $d_i^+$ of any $i \in V$
equals $|S_{i,t}| - 1$. Similarly, the indegree $d_i^-$ of $i$ is the number of actions $j \neq i$ such that $i \in S_{j,t}$ (i.e.,
such that $j \xrightarrow{t} i$). A notable special case of the above is when the observation system is symmetric
over time: $j \in S_{i,t}$ if and only if $i \in S_{j,t}$ for all $i, j$ and $t$. In words, playing $i$ at time $t$ reveals the
loss of $j$ if and only if playing $j$ at time $t$ reveals the loss of $i$. A symmetric observation system is
equivalent to $G_t$ being an undirected graph or, more precisely, to a directed graph having, for every
pair of nodes $i, j \in V$, either no arcs or length-two directed cycles. Thus, from the point of view
of the symmetry of the observation system, we also distinguish between the directed case ($G_t$ is a
general directed graph) and the symmetric case ($G_t$ is an undirected graph for all $t$).
The analysis of our algorithms depends on certain properties of the sequence of graphs $G_t$. Two
graph-theoretic notions playing an important role here are those of independent sets and dominating
sets. Given an undirected graph $G = (V, E)$, an independent set of $G$ is any subset $T \subseteq V$ such
that no two $i, j \in T$ are connected by an edge in $E$. An independent set is maximal if no proper
superset thereof is itself an independent set. The size of a largest (maximal) independent set is the
independence number of $G$, denoted by $\alpha(G)$. If $G$ is directed, we can still associate with it an
independence number: we simply view $G$ as undirected by ignoring arc orientation. If $G = (V, D)$
is a directed graph, then a subset $R \subseteq V$ is a dominating set for $G$ if for all $j \notin R$ there exists
some $i \in R$ such that arc $(i,j) \in D$. In our bandit setting, a time-$t$ dominating set $R_t$ is a subset of
actions with the property that the loss of any remaining action in round $t$ can be observed by playing
some action in $R_t$.
^4 Although we defined the problem in terms of losses, our analysis can be applied to the case when actions
return rewards $g_{i,t} \in [0,1]$ via the transformation $\ell_{i,t} = 1 - g_{i,t}$.
Algorithm 1: Exp3-SET (for the uninformed setting)
Parameter: $\eta \in [0, 1]$.
Initialize: $w_{i,1} = 1$ for all $i \in V = \{1, \dots, K\}$.
For $t = 1, 2, \dots$:
  1. Observation system $\{S_{i,t}\}_{i \in V}$ is generated but not disclosed;
  2. Set $p_{i,t} = w_{i,t}/W_t$ for each $i \in V$, where $W_t = \sum_{j \in V} w_{j,t}$;
  3. Play action $I_t$ drawn according to distribution $p_t = (p_{1,t}, \dots, p_{K,t})$;
  4. Observe pairs $(i, \ell_{i,t})$ for all $i \in S_{I_t,t}$;
  5. Observation system $\{S_{i,t}\}_{i \in V}$ is disclosed;
  6. For any $i \in V$ set $w_{i,t+1} = w_{i,t}\,\exp\bigl(-\eta\,\widehat{\ell}_{i,t}\bigr)$, where
     $\widehat{\ell}_{i,t} = \dfrac{\ell_{i,t}}{q_{i,t}}\,\mathbb{I}\{i \in S_{I_t,t}\}$ and $q_{i,t} = \sum_{j : j \xrightarrow{t} i} p_{j,t}$.
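To make the update concrete, here is a minimal, illustrative Python rendering of Exp3-SET (ours, not the authors' code). The `environment` callback is an assumption of the sketch: it plays the chosen action and then discloses the realized losses and the full observation system, mirroring steps 4-5 above.

```python
import numpy as np

def exp3_set(K, T, eta, environment, rng=np.random.default_rng(0)):
    """Illustrative Exp3-SET sketch. `environment(t, I)` plays action I and returns
    (observed_losses, obs_system): a dict {i: loss_i for i in S_{I,t}} together with the
    disclosed observation system, a list where obs_system[j] is the set S_{j,t}."""
    w = np.ones(K)
    for t in range(T):
        p = w / w.sum()
        I = rng.choice(K, p=p)
        observed_losses, obs_system = environment(t, I)
        for i, loss in observed_losses.items():
            # q_{i,t}: total probability mass of actions j whose play reveals i (j -> i).
            q = sum(p[j] for j in range(K) if i in obs_system[j])
            w[i] *= np.exp(-eta * loss / q)   # importance-weighted exponential update
    return w
```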
A dominating set is minimal if no proper subset thereof is itself a dominating set.
The domination number of a directed graph $G$, denoted by $\gamma(G)$, is the size of a smallest (minimal)
dominating set of $G$.
Computing a minimum dominating set for an arbitrary directed graph $G_t$ is equivalent to solving a
minimum set cover problem on the associated observation system $\{S_{i,t}\}_{i \in V}$. Although minimum
set cover is NP-hard, the well-known Greedy Set Cover algorithm [9], which repeatedly selects
from $\{S_{i,t}\}_{i \in V}$ the set containing the largest number of uncovered elements so far, computes a
dominating set $R_t$ such that $|R_t| \le \gamma(G_t)\,(1 + \ln K)$.
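A direct transcription of that greedy procedure (an illustrative sketch of ours; the observation system is passed as a list of Python sets):

```python
def greedy_dominating_set(obs_system):
    """Greedy Set Cover on the observation system {S_i}: obs_system[i] is the set of
    actions revealed by playing i (with i in obs_system[i]). Returns a dominating set R
    whose size is at most gamma(G) * (1 + ln K)."""
    K = len(obs_system)
    uncovered = set(range(K))
    R = []
    while uncovered:
        # Pick the action whose observation set covers the most still-uncovered actions.
        i = max(range(K), key=lambda j: len(obs_system[j] & uncovered))
        R.append(i)
        uncovered -= obs_system[i]
    return R
```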
Finally, we can also lift the notion of independence number of an undirected graph to directed graphs
through the notion of maximum acyclic subgraphs: Given a directed graph $G = (V, D)$, an acyclic
subgraph of $G$ is any graph $G' = (V', D')$ such that $V' \subseteq V$ and $D' = D \cap (V' \times V')$, with no
(directed) cycles. We denote by $\mathrm{mas}(G) = |V'|$ the maximum size of such a $V'$. Note that when $G$
is undirected (more precisely, as above, when $G$ is a directed graph having for every pair of nodes
$i, j \in V$ either no arcs or length-two cycles), then $\mathrm{mas}(G) = \alpha(G)$; otherwise $\mathrm{mas}(G) \ge \alpha(G)$.
In particular, when $G$ is itself a directed acyclic graph, then $\mathrm{mas}(G) = |V|$.
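Both quantities are NP-hard to compute in general, but on the small example graphs used in this paper they can be brute-forced. The following sketch (ours, exponential time, for intuition only) checks every vertex subset:

```python
from itertools import combinations

def alpha_and_mas(V, D):
    """Brute-force independence number and maximum-acyclic-subgraph size.
    V: number of nodes; D: set of arcs (i, j). Exponential time: toy graphs only."""
    def acyclic(nodes):
        # Kahn-style topological check restricted to the induced subgraph.
        nodes = set(nodes)
        arcs = {(i, j) for (i, j) in D if i in nodes and j in nodes}
        indeg = {v: 0 for v in nodes}
        for _, j in arcs:
            indeg[j] += 1
        stack = [v for v in nodes if indeg[v] == 0]
        removed = 0
        while stack:
            v = stack.pop()
            removed += 1
            for (i, j) in list(arcs):
                if i == v:
                    arcs.discard((i, j))
                    indeg[j] -= 1
                    if indeg[j] == 0:
                        stack.append(j)
        return removed == len(nodes)

    alpha = mas = 0
    for r in range(V + 1):
        for nodes in combinations(range(V), r):
            arcs = {(i, j) for (i, j) in D if i in nodes and j in nodes}
            if not arcs:               # independent when arc orientation is ignored
                alpha = max(alpha, r)
            if acyclic(nodes):
                mas = max(mas, r)
    return alpha, mas
```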
3 Algorithms without Explicit Exploration: The Uninformed Setting
In this section, we show that a simple variant of the Exp3 algorithm [3] obtains optimal regret (to
within logarithmic factors) in the symmetric and uninformed setting. We then show that even the
harder adversarial directed setting lends itself to an analysis, though with a weaker regret bound.
Exp3-SET (Algorithm 1) runs Exp3 without mixing with the uniform distribution. Similar to Exp3,
Exp3-SET uses loss estimates $\widehat{\ell}_{i,t}$ that divide each observed loss $\ell_{i,t}$ by the probability $q_{i,t}$ of
observing it. This probability $q_{i,t}$ is simply the sum of all $p_{j,t}$ such that $j \xrightarrow{t} i$ (the sum includes $p_{i,t}$).
Next, we bound the regret of Exp3-SET in terms of the key quantity
$$Q_t = \sum_{i \in V} \frac{p_{i,t}}{q_{i,t}} = \sum_{i \in V} \frac{p_{i,t}}{\sum_{j : j \xrightarrow{t} i} p_{j,t}}. \qquad (1)$$
Each term $p_{i,t}/q_{i,t}$ can be viewed as the probability of drawing $i$ from $p_t$ conditioned on the event
that $i$ was observed. Similar to [14], a key aspect of our analysis is the ability to deterministically and
nonvacuously^5 upper bound $Q_t$ in terms of certain quantities defined on $\{S_{i,t}\}_{i \in V}$. We do so in two
ways, either irrespective of how small each $p_{i,t}$ may be (this section) or depending on suitable lower
bounds on the probabilities $p_{i,t}$ (Section 4). In fact, forcing lower bounds on $p_{i,t}$ is equivalent to
adding exploration terms to the algorithm, which can be done only when knowing $\{S_{i,t}\}_{i \in V}$ before
each prediction; this information is available only in the informed setting.
^5 An obvious upper bound on $Q_t$ is $K$.
The following result is the building block for all subsequent results in the uninformed setting.^6
Theorem 1. The regret of Exp3-SET satisfies
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \frac{\ln K}{\eta} + \frac{\eta}{2} \sum_{t=1}^{T} \mathbb{E}[Q_t].$$
As we said, in the adversarial and symmetric case the observation system at time $t$ can be described
by an undirected graph $G_t = (V, E_t)$. This is essentially the problem of [14], which they studied
in the easier informed setting, where the same quantity $Q_t$ above arises in the analysis of their
ELP algorithm. In their Lemma 3, they show that $Q_t \le \alpha(G_t)$, irrespective of the choice of the
probabilities $p_t$. When applied to Exp3-SET, this immediately gives the following result.
Corollary 2. In the symmetric setting, the regret of Exp3-SET satisfies
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \frac{\ln K}{\eta} + \frac{\eta}{2} \sum_{t=1}^{T} \mathbb{E}[\alpha(G_t)].$$
In particular, if for constants $\alpha_1, \dots, \alpha_T$ we have $\alpha(G_t) \le \alpha_t$, $t = 1, \dots, T$, then setting
$\eta = \sqrt{(2 \ln K)/\sum_{t=1}^T \alpha_t}$ gives
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \sqrt{2 (\ln K) \sum_{t=1}^{T} \alpha_t}.$$
The bounds proven in Corollary 2 are equivalent to those proven in [14] (Theorem 2 therein) for
the ELP algorithm. Yet, our analysis is much simpler and, more importantly, our algorithm is simpler and more efficient than ELP, which requires solving a linear program at each step. Moreover,
unlike ELP, Exp3-SET does not require prior knowledge of the observation system $\{S_{i,t}\}_{i \in V}$ at the
beginning of each step.
We now turn to the directed setting. We start by considering a setting in which the observation
system is stochastically generated. Then, we turn to the harder adversarial setting.
The Erdős-Rényi model is a standard model for random directed graphs $G = (V, D)$, where we are
given a density parameter $r \in [0,1]$ and, for any pair $i, j \in V$, arc $(i,j) \in D$ with independent
probability $r$.^7 We have the following result.
Corollary 3. Let $G_t$ be generated according to the Erdős-Rényi model with parameter $r \in [0,1]$.
Then the regret of Exp3-SET satisfies
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \frac{\ln K}{\eta} + \frac{\eta\, T}{2r}\Bigl(1 - (1-r)^K\Bigr).$$
In the above, the expectations $\mathbb{E}[\cdot]$ are with respect to both the algorithm's randomization and the random
generation of $G_t$ occurring at each round. In particular, setting $\eta = \sqrt{\dfrac{2 r \ln K}{T\,(1 - (1-r)^K)}}$ gives
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \sqrt{\frac{2 (\ln K)\, T\, \bigl(1 - (1-r)^K\bigr)}{r}}.$$
Note that as $r$ ranges in $[0,1]$ we interpolate between the bandit ($r = 0$)^8 and the expert ($r = 1$)
regret bounds.
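A quick numeric sanity check (ours) of the factor $(1-(1-r)^K)/r$: draw Erdős-Rényi observability graphs with self-loops always present and average $Q_t$ under the uniform distribution $p_{i,t} = 1/K$.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_Q_uniform(K, r, trials=2000):
    """Monte Carlo estimate of E[Q_t] under the uniform distribution p = (1/K, ..., 1/K),
    when G_t is Erdos-Renyi with density r and every action reveals its own loss."""
    total = 0.0
    for _ in range(trials):
        A = rng.random((K, K)) < r       # A[j, i]: arc j -> i present
        np.fill_diagonal(A, True)        # self-loops: i always reveals its own loss
        q = A.sum(axis=0) / K            # q_i under uniform p
        total += ((1.0 / K) / q).sum()   # Q_t = sum_i p_i / q_i
    return total / trials

K, r = 20, 0.3
print(avg_Q_uniform(K, r), (1 - (1 - r) ** K) / r)   # the two numbers should be close
```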
When the observation system is generated by an adversary, we have the following result.
Corollary 4. In the directed setting, the regret of Exp3-SET satisfies
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \frac{\ln K}{\eta} + \frac{\eta}{2} \sum_{t=1}^{T} \mathbb{E}[\mathrm{mas}(G_t)].$$
^6 All proofs are given in the supplementary material to this paper.
^7 Self-loops, i.e., arcs $(i,i)$, are included by default here.
^8 Observe that $\lim_{r \to 0^+} \frac{1 - (1-r)^K}{r} = K$.
In particular, if for constants $m_1, \dots, m_T$ we have $\mathrm{mas}(G_t) \le m_t$, $t = 1, \dots, T$, then setting
$\eta = \sqrt{(2 \ln K)/\sum_{t=1}^T m_t}$ gives
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \sqrt{2 (\ln K) \sum_{t=1}^{T} m_t}.$$
Observe that Corollary 4 is a strict generalization of Corollary 2 because, as we pointed out in
Section 2, $\mathrm{mas}(G_t) \ge \alpha(G_t)$, with equality holding when $G_t$ is an undirected graph.
As far as lower bounds are concerned, in the symmetric setting, the authors of [14] derive a lower
bound of $\Omega\bigl(\sqrt{\alpha(G)\,T}\bigr)$ in the case when $G_t = G$ for all $t$. We remark that, similar to the symmetric
setting, we can derive a lower bound of $\Omega\bigl(\sqrt{\alpha(G)\,T}\bigr)$ here as well. The simple observation is that given a
directed graph $G$, we can define a new graph $G'$ which is made undirected just by reciprocating arcs;
namely, if there is an arc $(i,j)$ in $G$ we add arcs $(i,j)$ and $(j,i)$ in $G'$. Note that $\alpha(G) = \alpha(G')$.
Since in $G'$ the learner can only receive more information than in $G$, any lower bound on $G'$
also applies to $G$. Therefore we derive the following corollary to the lower bound of [14] (Theorem 4
therein).
Corollary 5. Fix a directed graph $G$, and suppose $G_t = G$ for all $t$. Then there exists a (randomized)
adversarial strategy such that for any $T = \Omega\bigl(\alpha(G)^3\bigr)$ and for any learning strategy, the expected
regret of the learner is $\Omega\bigl(\sqrt{\alpha(G)\,T}\bigr)$.
Moreover, standard results in the theory of Erdős-Rényi graphs, at least in the symmetric case (e.g.,
[11]), show that, when the density parameter $r$ is constant, the independence number of the resulting
graph has an inverse dependence on $r$. This fact, combined with the above-mentioned lower bound
of [14], gives a lower bound of the form $\sqrt{T/r}$, matching (up to logarithmic factors) the upper bound
of Corollary 3.
One may wonder whether a sharper lower bound argument exists which applies to the general directed adversarial setting and involves the larger quantity $\mathrm{mas}(G)$. Unfortunately, the above measure does not seem to be related to the optimal regret: Using Claim 1 in the appendix (see proof of
Theorem 3) one can exhibit a sequence of graphs each having a large acyclic subgraph, on which
the regret of Exp3-SET is still small.
The lack of a lower bound matching the upper bound provided by Corollary 4 is a good indication
that something more sophisticated has to be done to upper bound $Q_t$ in (1). This leads us
to consider more refined ways of allocating probabilities $p_{i,t}$ to nodes. In the next section, we show
an allocation strategy that delivers optimal (to within logarithmic factors) regret bounds using prior
knowledge of the graphs $G_t$.
4 Algorithms with Explicit Exploration: The Informed Setting
We are still in the general scenario where graphs $G_t$ are adversarially generated and directed, but
now $G_t$ is made available before prediction. We start by showing a simple example where our
analysis of Exp3-SET inherently fails. This is due to the fact that, when the graph induced by the
observation system is directed, the key quantity $Q_t$ defined in (1) cannot be nonvacuously upper
bounded independently of the choice of probabilities $p_{i,t}$. A way around it is to introduce a new
algorithm, called Exp3-DOM, which controls the probabilities $p_{i,t}$ by adding an exploration term to the
distribution $p_t$. This exploration term is supported on a dominating set of the current graph $G_t$. For
this reason, Exp3-DOM requires prior access to a dominating set $R_t$ at each time step $t$, which, in
turn, requires prior knowledge of the entire observation system $\{S_{i,t}\}_{i \in V}$.
As announced, the next result shows that, even for simple directed graphs, there exist distributions
$p_t$ on the vertices such that $Q_t$ is linear in the number of nodes while the independence number is
1.^9 Hence, nontrivial bounds on $Q_t$ can be found only by imposing conditions on distribution $p_t$.
^9 In this specific example, the maximum acyclic subgraph has size $K$, which confirms the looseness of
Corollary 4.
Algorithm 2: Exp3-DOM (for the informed setting)
Input: exploration parameters $\gamma^{(b)} \in (0,1]$ for $b \in \{0, 1, \dots, \lfloor \log_2 K \rfloor\}$.
Initialization: $w_{i,1}^{(b)} = 1$ for all $i \in V$ and $b \in \{0, 1, \dots, \lfloor \log_2 K \rfloor\}$.
For $t = 1, 2, \dots$:
  1. Observation system $\{S_{i,t}\}_{i \in V}$ is generated and disclosed;
  2. Compute a dominating set $R_t \subseteq V$ for the graph $G_t$ associated with $\{S_{i,t}\}_{i \in V}$;
  3. Let $b_t$ be such that $|R_t| \in \bigl[2^{b_t},\, 2^{b_t+1} - 1\bigr]$;
  4. Set $W_t^{(b_t)} = \sum_{i \in V} w_{i,t}^{(b_t)}$;
  5. Set $p_{i,t}^{(b_t)} = \bigl(1 - \gamma^{(b_t)}\bigr)\,\dfrac{w_{i,t}^{(b_t)}}{W_t^{(b_t)}} + \dfrac{\gamma^{(b_t)}}{|R_t|}\,\mathbb{I}\{i \in R_t\}$;
  6. Play action $I_t$ drawn according to distribution $p_t^{(b_t)} = \bigl(p_{1,t}^{(b_t)}, \dots, p_{K,t}^{(b_t)}\bigr)$;
  7. Observe pairs $(i, \ell_{i,t})$ for all $i \in S_{I_t,t}$;
  8. For any $i \in V$ set $w_{i,t+1}^{(b_t)} = w_{i,t}^{(b_t)}\,\exp\bigl(-\gamma^{(b_t)}\,\widehat{\ell}_{i,t}^{(b_t)}/2^{b_t}\bigr)$, where
     $\widehat{\ell}_{i,t}^{(b_t)} = \dfrac{\ell_{i,t}}{q_{i,t}^{(b_t)}}\,\mathbb{I}\{i \in S_{I_t,t}\}$ and $q_{i,t}^{(b_t)} = \sum_{j : j \xrightarrow{t} i} p_{j,t}^{(b_t)}$.
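A compressed, illustrative rendering of one round of Exp3-DOM (ours, not the authors' code), reusing `greedy_dominating_set` from Section 2; `state`, `losses_for`, and `gamma` are assumptions of the sketch:

```python
import math
import numpy as np

def exp3_dom_round(state, t, obs_system, losses_for, gamma, rng):
    """One round of Exp3-DOM. state: dict mapping b -> weight vector of length K.
    obs_system[i] = S_{i,t} (disclosed up front); losses_for(I) returns {i: loss_i}
    for i in S_{I,t}; gamma[b] is the exploration parameter of copy b."""
    K = len(obs_system)
    R = greedy_dominating_set(obs_system)        # step 2
    b = int(math.floor(math.log2(len(R))))       # step 3: 2^b <= |R| <= 2^{b+1} - 1
    w = state[b]
    p = (1 - gamma[b]) * w / w.sum()             # steps 4-5: exponential weights ...
    p[R] += gamma[b] / len(R)                    # ... mixed with exploration on R
    I = rng.choice(K, p=p)                       # step 6
    for i, loss in losses_for(I).items():        # steps 7-8
        q = sum(p[j] for j in range(K) if i in obs_system[j])
        w[i] *= math.exp(-gamma[b] * (loss / q) / 2 ** b)
    return I
```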
Fact 6. Let $G = (V, D)$ be a total order on $V = \{1, \dots, K\}$, i.e., such that for all $i \in V$, arc
$(j, i) \in D$ for all $j = i+1, \dots, K$. Let $p = (p_1, \dots, p_K)$ be a distribution on $V$ such that $p_i = 2^{-i}$
for $i < K$ and $p_K = 2^{-K+1}$. Then
$$Q = \sum_{i=1}^{K} \frac{p_i}{p_i + \sum_{j : j \to i} p_j} = \sum_{i=1}^{K} \frac{p_i}{\sum_{j=i}^{K} p_j} = \frac{K+1}{2}.$$
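This is easy to verify numerically (a sketch of ours):

```python
K = 10
p = [2.0 ** -(i + 1) for i in range(K - 1)] + [2.0 ** -(K - 1)]  # p_i = 2^{-i}, p_K = 2^{-K+1}
Q = sum(p[i] / sum(p[i:]) for i in range(K))                     # q_i = sum_{j >= i} p_j
print(Q, (K + 1) / 2)                                            # both print 5.5
```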
We are now ready to introduce and analyze the new algorithm Exp3-DOM for the informed and
directed setting. Exp3-DOM (see Algorithm 2) runs $O(\log K)$ variants of Exp3 indexed by $b =
0, 1, \dots, \lfloor \log_2 K \rfloor$. At time $t$ the algorithm is given the observation system $\{S_{i,t}\}_{i \in V}$, and computes
a dominating set $R_t$ of the directed graph $G_t$ induced by $\{S_{i,t}\}_{i \in V}$. Based on the size $|R_t|$ of $R_t$,
the algorithm uses instance $b_t = \lfloor \log_2 |R_t| \rfloor$ to pick action $I_t$. We use a superscript $b$ to denote the
quantities relevant to the variant of Exp3 indexed by $b$. Similarly to the analysis of Exp3-SET, the
key quantities are
$$q_{i,t}^{(b)} = \sum_{j : i \in S_{j,t}} p_{j,t}^{(b)} = \sum_{j : j \xrightarrow{t} i} p_{j,t}^{(b)} \quad\text{and}\quad Q_t^{(b)} = \sum_{i \in V} \frac{p_{i,t}^{(b)}}{q_{i,t}^{(b)}}, \qquad b = 0, 1, \dots, \lfloor \log_2 K \rfloor.$$
Let $T^{(b)} = \bigl\{t = 1, \dots, T : |R_t| \in [2^b, 2^{b+1} - 1]\bigr\}$. Clearly, the sets $T^{(b)}$ are a partition of the time
steps $\{1, \dots, T\}$, so that $\sum_b |T^{(b)}| = T$. Since the adversary adaptively chooses the dominating
sets $R_t$, the sets $T^{(b)}$ are random. This causes a problem in tuning the parameters $\gamma^{(b)}$. For this
reason, we do not prove a regret bound for Exp3-DOM, where each instance uses a fixed $\gamma^{(b)}$, but
for a slight variant (described in the proof of Theorem 7; see the appendix) where each $\gamma^{(b)}$ is set
through a doubling trick.
Theorem 7. In the directed case, the regret of Exp3-DOM satisfies
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] \le \sum_{b=0}^{\lfloor \log_2 K \rfloor} \left( \frac{2 \ln K}{\gamma^{(b)}} + \gamma^{(b)}\, \mathbb{E}\!\left[ \sum_{t \in T^{(b)}} \left( 1 + \frac{Q_t^{(b)}}{2^{b+1}} \right) \right] \right). \qquad (2)$$
Moreover, if we use a doubling trick to choose $\gamma^{(b)}$ for each $b = 0, \dots, \lfloor \log_2 K \rfloor$, then
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] = O\!\left( (\ln K)\, \mathbb{E}\!\left[ \sqrt{ \sum_{t=1}^{T} \Bigl( 4|R_t| + Q_t^{(b_t)} \Bigr) } \right] + (\ln K) \ln(KT) \right). \qquad (3)$$
Importantly, the next result shows how bound (3) of Theorem 7 can be expressed in terms of the sequence $\alpha(G_t)$ of independence numbers of the graphs $G_t$ whenever the Greedy Set Cover algorithm [9]
(see Section 2) is used to compute the dominating set $R_t$ of the observation system at time $t$.
Corollary 8. If Step 2 of Exp3-DOM uses the Greedy Set Cover algorithm to compute the dominating
sets $R_t$, then the regret of Exp3-DOM with the doubling trick satisfies
$$\max_{k \in V}\, \mathbb{E}\bigl[L_{A,T} - L_{k,T}\bigr] = O\!\left( \ln(K) \sqrt{ \ln(KT) \sum_{t=1}^{T} \alpha(G_t) } + \ln(K) \ln(KT) \right),$$
where, for each $t$, $\alpha(G_t)$ is the independence number of the graph $G_t$ induced by the observation system
$\{S_{i,t}\}_{i \in V}$.
Comparing Corollary 8 to Corollary 5 delivers the announced characterization in the general adversarial and directed setting. Moreover, a quick comparison between Corollary 2 and Corollary 8
reveals that a symmetric observation system overcomes the advantage of working in an informed
setting: the bound we obtained for the uninformed symmetric setting (Corollary 2) is sharper by
logarithmic factors than the one we derived for the informed (but more general, i.e., directed)
setting (Corollary 8).
5 Conclusions and work in progress
We have investigated online prediction problems in partial information regimes that interpolate between the classical bandit and expert settings. We have shown a number of results characterizing
prediction performance in terms of: the structure of the observation system, the amount of information available before prediction, and the nature (adversarial or fully random) of the process generating
the observation system. Our results are substantial improvements over the paper [14] that initiated this interesting line of research. Our improvements are diverse, and range from considering
both informed and uninformed settings to delivering more refined graph-theoretic characterizations,
from providing more efficient algorithmic solutions to relying on simpler (and often more general)
analytical tools.
Some research directions we are currently pursuing are the following: (1) We are currently investigating the extent to which our results could be applied to the case when the observation system
$\{S_{i,t}\}_{i \in V}$ may depend on the loss $\ell_{I_t,t}$ of the player's action $I_t$. Note that this would prevent a direct construction of an unbiased estimator for unobserved losses, which many worst-case bandit
algorithms (including ours; see the appendix) hinge upon. (2) The upper bound contained in
Corollary 4 and expressed in terms of $\mathrm{mas}(\cdot)$ is almost certainly suboptimal, even in the uninformed
setting, and we are trying to see if more adequate graph complexity measures can be used instead.
(3) Our lower bound in Corollary 5 heavily relies on the corresponding lower bound in [14] which,
in turn, refers to a constant graph sequence. We would like to provide a more complete characterization applying to sequences of adversarially-generated graphs $G_1, G_2, \dots, G_T$ in terms of sequences
of their corresponding independence numbers $\alpha(G_1), \alpha(G_2), \dots, \alpha(G_T)$ (or variants thereof), in
both the uninformed and the informed settings. (4) All our upper bounds rely on parameters to be
tuned as a function of sequences of observation system quantities (e.g., the sequence of independence numbers). We are trying to see if an adaptive learning rate strategy à la [4], based on the
observable quantities $Q_t$, could give similar results without such prior knowledge.
Acknowledgments
NA was supported in part by an ERC advanced grant, by a USA-Israeli BSF grant, and by the Israeli
I-CORE program. NCB acknowledges partial support by MIUR (project ARS TechnoMedia, PRIN
2010-2011, grant no. 2010N5K7EB 003). YM was supported in part by a grant from the Israel
Science Foundation, a grant from the United States-Israel Binational Science Foundation (BSF), a
grant by Israel Ministry of Science and Technology and the Israeli Centers of Research Excellence
(I-CORE) program (Center No. 4/11).
References
[1] N. Alon and J. H. Spencer. The Probabilistic Method. John Wiley & Sons, 2004.
[2] Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, 2009.
[3] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.
[4] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48-75, 2002.
[5] Y. Caro. New results on the independence number. Technical Report, Tel-Aviv University, 1979.
[6] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427-485, 1997.
[7] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404-1422, 2012.
[9] V. Chvátal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233-235, 1979.
[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, pages 23-37. Springer-Verlag, 1995. Also JCSS, 55(1):119-139, 1997.
[11] A. M. Frieze. On the independence number of random graphs. Discrete Mathematics, 81:171-175, 1990.
[12] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291-307, 2005.
[13] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108:212-261, 1994.
[14] S. Mannor and O. Shamir. From bandits to experts: On the value of side-observations. In 25th Annual Conference on Neural Information Processing Systems (NIPS 2011), 2011.
[15] Alan Said, Ernesto W. De Luca, and Sahin Albayrak. How social relationships affect user similarities. In Proceedings of the International Conference on Intelligent User Interfaces Workshop on Social Recommender Systems, Hong Kong, 2010.
[16] V. G. Vovk. Aggregating strategies. In COLT, pages 371-386, 1990.
[17] V. K. Wei. A lower bound on the stability number of a simple graph. Bell Laboratories Technical Memorandum No. 81-11217-9, 1981.
4,319 | 4,909 | Eluder Dimension and the Sample Complexity of
Optimistic Exploration
Benjamin Van Roy
Stanford University
Stanford, CA 94305
[email protected]
Daniel Russo
Stanford University
Stanford, CA 94305
[email protected]
Abstract
This paper considers the sample complexity of the multi-armed bandit with dependencies among the arms. Some of the most successful algorithms for this problem
use the principle of optimism in the face of uncertainty to guide exploration. The
clearest example of this is the class of upper confidence bound (UCB) algorithms,
but recent work has shown that a simple posterior sampling algorithm, sometimes
called Thompson sampling, can be analyzed in the same manner as optimistic approaches. In this paper, we develop a regret bound that holds for both classes of
algorithms. This bound applies broadly and can be specialized to many model
classes. It depends on a new notion we refer to as the eluder dimension, which
measures the degree of dependence among action rewards. Compared to UCB
algorithm regret bounds for specific model classes, our general bound matches the
best available for linear models and is stronger than the best available for generalized linear models.
1 Introduction
Consider a politician trying to elude a group of reporters. She hopes to keep her true position hidden
from the reporters, but each piece of information she provides must be new, in the sense that it's not
a clear consequence of what she has already told them. How long can she continue before her true
position is pinned down? This is the essence of what we call the eluder dimension. We show this
notion controls the rate at which algorithms using optimistic exploration converge to optimality.
We consider an optimization problem faced by an agent who is uncertain about how her actions
influence performance. The agent selects actions sequentially, and upon each action observes a
reward. A reward function governs the mean reward of each action. As rewards are observed the
agent learns about the reward function, and this allows her to improve behavior. Good performance
requires adaptively sampling actions in a way that strikes an effective balance between exploring
poorly understood actions and exploiting previously acquired knowledge to attain high rewards.
Unless the agent has prior knowledge of the structure of the mean payoff function, she can only learn
to attain near optimal performance by exhaustively sampling each possible action. In this paper, we
focus on problems where there is a known relationship among the rewards generated by different
actions, potentially allowing the agent to learn without exploring every action. Problems of this form
are often referred to as multi-armed bandit (MAB) problems with dependent arms.
A notable example is the "linear bandit" problem, where actions are described by a finite number
of features and the reward function is linear in these features. Several researchers have studied
algorithms for such problems and established theoretical guarantees that have no dependence on the
number of actions [1, 2, 3]. Instead, their bounds depend on the linear dimension of the class of
reward functions. In this paper, we assume that the reward function lies in a known but otherwise
arbitrary class of uniformly bounded real-valued functions, and provide theoretical guarantees that
depend on more general measures of the complexity of the class of functions. Our analysis of
this abstract framework yields a result that applies broadly, beyond the scope of specific problems
that have been studied in the literature, and also identifies fundamental insights that unify more
specialized prior results.
The guarantees we provide apply to two popular classes of algorithms for the stochastic MAB:
upper confidence bound (UCB) algorithms and Thompson sampling. Each algorithm is described
in Section 3. The aforementioned papers on the linear bandit problem study UCB algorithms [1,
2, 3]. Other authors have studied UCB algorithms in cases where the reward function is Lipschitz
continuous [4, 5], sampled from a Gaussian process [6], or takes the form of a generalized [7] or
sparse [8] linear model. More generally, there is an immense literature on this approach to balancing
between exploration and exploitation, including work on bandits with independent arms [9, 10, 11,
12], reinforcement learning [13, 14], and Monte Carlo Tree Search [15].
Recently, a simple posterior sampling algorithm called Thompson sampling was shown to share a
close connection with UCB algorithms [16]. This connection enables us to study both types of
algorithms in a unified manner. Though it was first proposed in 1933 [17], Thompson sampling
has until recently received relatively little attention. Interest in the algorithm grew after empirical
studies [18, 19] demonstrated performance exceeding state-of-the-art methods. Strong theoretical
guarantees are now available for an important class of problems with independent arms [20, 21, 22].
A recent paper considers the application of this algorithm to a linear contextual bandit problem [23].
To our knowledge, few other papers have studied MAB problems in a general framework like the
one we consider. There is work that provides general bounds for contextual bandit problems where
the context space is allowed to be infinite, but the action space is small (see e.g., [24]). Our model
captures contextual bandits as a special case, but we emphasize problem instances with large or
infinite action sets, and where the goal is to learn without sampling every possible action. The closest
related work to ours is that of Amin et al. [25], who consider the problem of learning the optimum
of a function that lies in a known, but otherwise arbitrary set of functions. They provide bounds
based on a new notion of dimension, but unfortunately this notion does not provide a guarantee for
the algorithms we consider.
We provide bounds on expected regret over a time horizon $T$ that are, up to a logarithmic factor, of
order
$$\sqrt{\underbrace{\dim_E\bigl(\mathcal{F}, T^{-2}\bigr)}_{\text{eluder dimension}}\; \underbrace{\log N\bigl(\mathcal{F}, T^{-2}, \|\cdot\|_\infty\bigr)}_{\text{log-covering number}}\; T}\,.$$
This quantity depends on the class of reward functions F through two measures of complexity. Each
captures the approximate structure of the class of functions at a scale $T^{-2}$ that depends on the time
horizon. The first measures the growth rate of the covering numbers of F, and is closely related to
measures of complexity that are common in the supervised learning literature. This quantity roughly
captures the sensitivity of F to statistical over-fitting. The second measure, the eluder dimension,
is a new notion we introduce. This captures how effectively the value of unobserved actions can be
inferred from observed samples. We highlight in Section 4.1 why notions of dimension common to
the supervised learning literature are insufficient for our purposes. Finally, we show that our more
general result when specialized to linear models recovers the strongest known regret bound and in
the case of generalized linear models yields a bound stronger than that established in prior literature.
2  Problem Formulation
We consider a model involving a set of actions $\mathcal{A}$ and a set of real-valued functions $\mathcal{F} = \{f_\theta : \mathcal{A} \to \mathbb{R} \mid \theta \in \Theta\}$, indexed by a parameter that takes values from an index set $\Theta$. We will define random variables with respect to a probability space $(\Omega, \mathbb{F}, \mathbb{P})$. A random variable $\theta$ indexes the true reward function $f_\theta$. At each time $t$, the agent is presented with a possibly random subset $\mathcal{A}_t \subseteq \mathcal{A}$ and selects an action $A_t \in \mathcal{A}_t$, after which she observes a reward $R_t$.
We denote by $H_t$ the history $(\mathcal{A}_1, A_1, R_1, \ldots, \mathcal{A}_{t-1}, A_{t-1}, R_{t-1}, \mathcal{A}_t)$ of observations available to the agent when choosing an action $A_t$. The agent employs a policy $\pi = \{\pi_t \mid t \in \mathbb{N}\}$, which is a deterministic sequence of functions, each mapping the history $H_t$ to a probability distribution over actions $\mathcal{A}$. For each realization of $H_t$, $\pi_t(H_t)$ is a distribution over $\mathcal{A}$ with support $\mathcal{A}_t$. The action $A_t$ is selected by sampling from the distribution $\pi_t(\cdot)$, so that $\mathbb{P}(A_t \in \cdot \mid H_t) = \pi_t(H_t)$. We assume that $\mathbb{E}[R_t \mid H_t, \theta, A_t] = f_\theta(A_t)$. In other words, the realized reward is the mean-reward value corrupted by zero-mean noise. We will also assume that for each $f \in \mathcal{F}$ and $t \in \mathbb{N}$, $\arg\max_{a \in \mathcal{A}_t} f(a)$ is nonempty with probability one, though algorithms and results can be generalized to handle cases where this assumption does not hold. We fix constants $C > 0$ and $\sigma > 0$ and impose two further simplifying assumptions. The first concerns boundedness of reward functions.
Assumption 1. For all $f \in \mathcal{F}$ and $a \in \mathcal{A}$, $f(a) \in [0, C]$.
Our second assumption ensures that observation noise is light-tailed. We say a random variable $X$ is $\sigma$-sub-Gaussian if $\mathbb{E}[\exp(\lambda X)] \le \exp(\lambda^2 \sigma^2 / 2)$ almost surely for all $\lambda$.
Assumption 2. For all $t \in \mathbb{N}$, $R_t - f_\theta(A_t)$ conditioned on $(H_t, \theta, A_t)$ is $\sigma$-sub-Gaussian.
We let $A_t^* \in \arg\max_{a \in \mathcal{A}_t} f_\theta(a)$ denote the optimal action at time $t$. The $T$-period regret is the random variable
$$R(T, \pi) = \sum_{t=1}^{T} \left[ f_\theta(A_t^*) - f_\theta(A_t) \right],$$
where the actions $\{A_t : t \in \mathbb{N}\}$ are selected according to $\pi$. We sometimes study expected regret $\mathbb{E}[R(T, \pi)]$, where the expectation is taken over the prior distribution of $\theta$, the reward noise, and the algorithm's internal randomization. This quantity is sometimes called Bayes risk or Bayesian regret. Similarly, we study conditional expected regret $\mathbb{E}[R(T, \pi) \mid \theta]$, which integrates over all randomness in the system except for $\theta$.
Example 1. Contextual Models. The contextual multi-armed bandit model is a special case of the formulation presented above. In such a model, an exogenous Markov process $X_t$ taking values in a set $\mathcal{X}$ influences rewards. In particular, the expected reward at time $t$ is given by $f_\theta(a, X_t)$. However, this is mathematically equivalent to a problem with stochastic time-varying decision sets $\mathcal{A}_t$. In particular, one can define the set of actions to be the set of state-action pairs $\mathcal{A} := \{(x, a) : x \in \mathcal{X},\ a \in \mathcal{A}(x)\}$, and the set of available actions to be $\mathcal{A}_t = \{(X_t, a) : a \in \mathcal{A}(X_t)\}$.
3  Algorithms
We will establish performance bounds for two classes of algorithms: Thompson sampling and UCB
algorithms. As background, we discuss the algorithms in this section. We also provide an example
of each type of algorithm that is designed to address the ?linear bandit? problem.
UCB Algorithms: UCB algorithms have received a great deal of attention in the MAB literature.
Here we describe a very broad class of UCB algorithms. We say that a confidence set is a random
subset $\mathcal{F}_t \subseteq \mathcal{F}$ that is measurable with respect to $\sigma(H_t)$. Typically, $\mathcal{F}_t$ is constructed so that it contains $f_\theta$ with high probability. We denote by $\pi^{\mathcal{F}_{1:\infty}}$ a UCB algorithm that makes use of a sequence of confidence sets $\{\mathcal{F}_t : t \in \mathbb{N}\}$. At each time $t$, such an algorithm selects the action
$$A_t \in \arg\max_{a \in \mathcal{A}_t} \sup_{f \in \mathcal{F}_t} f(a),$$
where $\sup_{f \in \mathcal{F}_t} f(a)$ is an optimistic estimate of $f_\theta(a)$ representing the greatest value that is statistically plausible at time $t$. Optimism encourages selection of poorly-understood actions, which leads to
informative observations. As data accumulates, optimistic estimates are adapted, and this process of
exploration and learning converges toward optimal behavior.
In this paper, we will assume for simplicity that the maximum defining $A_t$ is attained. Results can be generalized to handle cases when this technical condition does not hold. Unfortunately, for natural choices of $\mathcal{F}_t$, it may be exceptionally difficult to solve for such an action. Thankfully, all results in
this paper also apply to a posterior sampling algorithm that avoids this hard optimization problem.
Thompson sampling: The Thompson sampling algorithm simply samples each action according to the probability it is optimal. In particular, the algorithm applies action sampling distributions $\pi_t^{TS}(H_t) = \mathbb{P}(A_t^* \in \cdot \mid H_t)$, where $A_t^*$ is a random variable that satisfies $A_t^* \in \arg\max_{a \in \mathcal{A}_t} f_\theta(a)$. Practical implementations typically operate by, at each time $t$, sampling an index $\hat{\theta}_t \in \Theta$ from the distribution $\mathbb{P}(\theta \in \cdot \mid H_t)$ and then generating an action $A_t \in \arg\max_{a \in \mathcal{A}_t} f_{\hat{\theta}_t}(a)$.
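To make this concrete, the following is a minimal sketch of posterior sampling for a finite index set $\Theta$ with Gaussian reward noise; the indicator-style mean-reward function, uniform prior, and horizon are illustrative assumptions rather than anything specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite model class: theta indexes which action has elevated reward.
n_actions, sigma = 10, 0.5
thetas = np.arange(n_actions)

def mean_reward(theta, a):          # f_theta(a)
    return 1.0 if a == theta else 0.0

true_theta = rng.integers(n_actions)
log_post = np.zeros(n_actions)      # log posterior over theta (uniform prior)

for t in range(200):
    # Sample theta_hat ~ P(theta in . | H_t), then act greedily w.r.t. f_theta_hat.
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    theta_hat = rng.choice(thetas, p=post)
    a = int(np.argmax([mean_reward(theta_hat, act) for act in range(n_actions)]))
    r = mean_reward(true_theta, a) + sigma * rng.standard_normal()
    # Bayes update: Gaussian likelihood of the observed reward under each theta.
    log_post += -0.5 * ((r - np.array([mean_reward(th, a) for th in thetas])) / sigma) ** 2
```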
Algorithm 1: Linear UCB
1: Initialize: select $d$ linearly independent actions.
2: Update statistics: $\hat{\theta}_t \leftarrow$ OLS estimate of $\theta$; $\Phi_t \leftarrow \sum_{k=1}^{t-1} \phi(A_k)\,\phi(A_k)^{\top}$; $\Theta_t \leftarrow \big\{\theta : \|\hat{\theta}_t - \theta\|_{\Phi_t} \le \sqrt{\beta\, d \log t}\big\}$.
3: Select action: $A_t \in \arg\max_{a \in \mathcal{A}} \big\{ \max_{\theta \in \Theta_t} \langle \phi(a), \theta \rangle \big\}$.
4: Increment $t$ and go to step 2.

Algorithm 2: Linear Thompson sampling
1: Sample model: $\hat{\theta}_t \sim N(\mu_t, \Sigma_t)$.
2: Select action: $A_t \in \arg\max_{a \in \mathcal{A}} \langle \phi(a), \hat{\theta}_t \rangle$.
3: Update statistics: $\mu_{t+1} \leftarrow \mathbb{E}[\theta \mid H_{t+1}]$; $\Sigma_{t+1} \leftarrow \mathbb{E}\big[(\theta - \mu_{t+1})(\theta - \mu_{t+1})^{\top} \mid H_{t+1}\big]$.
4: Increment $t$ and go to step 1.
Algorithms for Linear Bandits: Here we provide an example of a Thompson sampling and a UCB algorithm, each of which addresses a problem in which the reward function is linear in a $d$-dimensional vector $\theta$. In particular, there is a known feature mapping $\phi : \mathcal{A} \to \mathbb{R}^d$ such that an action $a$ yields expected reward $f_\theta(a) = \langle \phi(a), \theta \rangle$. Algorithm 1 is a variation of one proposed by Rusmevichientong and Tsitsiklis [3] to address such problems. Given past observations, the algorithm constructs a confidence ellipsoid $\Theta_t$ centered around a least squares estimate $\hat{\theta}_t$ and employs the upper confidence bound $U_t(a) := \max_{\theta \in \Theta_t} \langle \phi(a), \theta \rangle = \langle \phi(a), \hat{\theta}_t \rangle + \sqrt{\beta\, d \log t}\; \|\phi(a)\|_{\Phi_t^{-1}}$. The term $\|\phi(a)\|_{\Phi_t^{-1}}$ captures the amount of previous exploration in the direction $\phi(a)$, and causes the "uncertainty bonus" $\sqrt{\beta\, d \log t}\; \|\phi(a)\|_{\Phi_t^{-1}}$ to diminish as the number of observations increases.
Now, consider Algorithm 2. Here we assume $\theta$ is drawn from a normal distribution $N(\mu_1, \Sigma_1)$. We consider a linear reward function $f_\theta(a) = \langle \phi(a), \theta \rangle$ and assume the reward noise $R_t - f_\theta(A_t)$ is normally distributed and independent from $(H_t, A_t, \theta)$. It is easy to show that, conditioned on the history $H_t$, $\theta$ remains normally distributed. Algorithm 2 presents an implementation of Thompson sampling for this problem. The expectations can be computed efficiently via Kalman filtering.
4  Notions of Dimension
Recently, there has been a great deal of interest in the development of regret bounds for linear UCB algorithms [1, 2, 3, 26]. These papers show that for a broad class of problems, a variant $\pi^*$ of Algorithm 1 satisfies the upper bounds $\mathbb{E}[R(T, \pi^*)] = \tilde{O}(d\sqrt{T})$ and $\mathbb{E}[R(T, \pi^*) \mid \theta] = \tilde{O}(d\sqrt{T})$. An interesting feature of these bounds is that they have no dependence on the number of actions in $\mathcal{A}$,
and instead depend only on the linear dimension of the set of functions F. Our goal is to provide
bounds that depend on more general measures of the complexity of the class of functions. This
section introduces a new notion, the eluder dimension, on which our bounds will depend. First,
we highlight why common notions from statistical learning theory do not suffice when it comes to
multi-armed bandit problems.
4.1  Vapnik-Chervonenkis Dimension
We begin with an example that illustrates how a class of functions that is learnable in constant time
in a supervised learning context may require an arbitrarily long duration when learning to optimize.
Example 2. Consider a finite class of binary-valued functions $\mathcal{F} = \{f_\theta : \mathcal{A} \to \{0, 1\} \mid \theta \in \{1, \ldots, n\}\}$ over a finite action set $\mathcal{A} = \{1, \ldots, n\}$. Let $f_\theta(a) = \mathbf{1}(\theta = a)$, so that each function is an indicator for an action. To keep things simple, assume that $R_t = f_\theta(A_t)$, so that there is no noise. If $\theta$ is uniformly distributed over $\{1, \ldots, n\}$, it is easy to see that the regret of any algorithm grows linearly with $n$. For large $n$, until $\theta$ is discovered, each sampled action is unlikely to reveal much about $\theta$ and learning therefore takes very long.
Consider the closely related supervised learning problem in which at each time an action $A_t$ is sampled uniformly from $\mathcal{A}$ and the mean-reward value $f_\theta(A_t)$ is observed. For large $n$, the time it takes to effectively learn to predict $f_\theta(A_t)$ given $A_t$ does not depend on $t$. In particular, prediction error converges to $1/n$ in constant time. Note that predicting 0 at every time already achieves this low level of error.
In the preceding example, the Vapnik-Chervonenkis (VC) dimension, which characterizes the sample complexity of supervised learning, is 1. On the other hand, the eluder dimension, which we will define below, is $n$. To highlight conceptual differences between the eluder dimension and the VC dimension, we will now define VC dimension in a way analogous to how we will define eluder dimension. We begin with a notion of independence.
Definition 1. An action $a$ is VC-independent of $\tilde{\mathcal{A}} \subseteq \mathcal{A}$ if for any $f, \tilde{f} \in \mathcal{F}$ there exists some $\bar{f} \in \mathcal{F}$ which agrees with $f$ on $a$ and with $\tilde{f}$ on $\tilde{\mathcal{A}}$; that is, $\bar{f}(a) = f(a)$ and $\bar{f}(\tilde{a}) = \tilde{f}(\tilde{a})$ for all $\tilde{a} \in \tilde{\mathcal{A}}$. Otherwise, $a$ is VC-dependent on $\tilde{\mathcal{A}}$.
By this definition, an action $a$ is said to be VC-dependent on $\tilde{\mathcal{A}}$ if knowing the values $f \in \mathcal{F}$ takes on $\tilde{\mathcal{A}}$ could restrict the set of possible values at $a$. This notion of independence is intimately related to the VC dimension of a class of functions. In fact, it can be used to define VC dimension.
Definition 2. The VC dimension of a class of binary-valued functions with domain $\mathcal{A}$ is the largest cardinality of a set $\tilde{\mathcal{A}} \subseteq \mathcal{A}$ such that every $a \in \tilde{\mathcal{A}}$ is VC-independent of $\tilde{\mathcal{A}} \setminus \{a\}$.
In the above example, any two actions are VC-dependent because knowing the label $f_\theta(a)$ of one
action could completely determine the value of the other action. However, this only happens if the
sampled action has label 1. If it has label 0, one cannot infer anything about the value of the other
action. Instead of capturing the fact that one could gain useful information through exploration, we
need a stronger requirement that guarantees one will gain useful information.
4.2  Defining Eluder Dimension
Here we define the eluder dimension of a class of functions, which plays a key role in our results.
Definition 3. An action $a \in \mathcal{A}$ is $\epsilon$-dependent on actions $\{a_1, \ldots, a_n\} \subseteq \mathcal{A}$ with respect to $\mathcal{F}$ if any pair of functions $f, \tilde{f} \in \mathcal{F}$ satisfying $\sqrt{\sum_{i=1}^{n} (f(a_i) - \tilde{f}(a_i))^2} \le \epsilon$ also satisfies $|f(a) - \tilde{f}(a)| \le \epsilon$. Further, $a$ is $\epsilon$-independent of $\{a_1, \ldots, a_n\}$ with respect to $\mathcal{F}$ if $a$ is not $\epsilon$-dependent on $\{a_1, \ldots, a_n\}$.
Intuitively, an action $a$ is independent of $\{a_1, \ldots, a_n\}$ if two functions that make similar predictions at $\{a_1, \ldots, a_n\}$ can nevertheless differ significantly in their predictions at $a$. The above definition measures the "similarity" of predictions at scale $\epsilon$, and measures whether two functions make similar predictions at $\{a_1, \ldots, a_n\}$ based on the cumulative discrepancy $\sqrt{\sum_{i=1}^{n} (f(a_i) - \tilde{f}(a_i))^2}$. This measure of dependence suggests using the following notion of dimension.
Definition 4. The $\epsilon$-eluder dimension $\dim_E(\mathcal{F}, \epsilon)$ is the length $d$ of the longest sequence of elements in $\mathcal{A}$ such that, for some $\epsilon' \ge \epsilon$, every element is $\epsilon'$-independent of its predecessors.
Recall that a vector space has dimension $d$ if and only if $d$ is the length of the longest sequence of elements such that each element is linearly independent or, equivalently, 0-independent of its predecessors. Definition 4 replaces the requirement of linear independence with $\epsilon$-independence. This extension is advantageous as it captures both nonlinear dependence and approximate dependence.
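For a small finite class, Definition 3 can be checked mechanically. The brute-force sketch below greedily builds a sequence in which every element is $\epsilon$-independent of its predecessors; the length of any such sequence lower-bounds $\dim_E(\mathcal{F}, \epsilon)$. The indicator class of Example 2 is used as an illustrative test case.

```python
import itertools
import numpy as np

def eps_dependent(F, a, seq, eps):
    """a is eps-dependent on seq w.r.t. F if every pair f, g in F that is
    close on seq (cumulative L2 gap <= eps) is also close at a (Definition 3)."""
    for f, g in itertools.combinations(F, 2):
        gap_seq = np.sqrt(sum((f[s] - g[s]) ** 2 for s in seq))
        if gap_seq <= eps and abs(f[a] - g[a]) > eps:
            return False  # (f, g) witnesses eps-independence of a from seq
    return True

def greedy_eluder_sequence(F, actions, eps):
    """Greedily extend a sequence whose every element is eps-independent of
    its predecessors; its length lower-bounds dim_E(F, eps)."""
    seq = []
    for a in actions:
        if not eps_dependent(F, a, seq, eps):
            seq.append(a)
    return seq

n = 6
actions = list(range(n))
F = [np.eye(n)[k] for k in range(n)]   # indicator functions of Example 2
print(len(greedy_eluder_sequence(F, actions, eps=0.5)))  # prints n - 1 here
```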
5  Confidence Bounds and Regret Decompositions
A key to our analysis is the recent observation [16] that the regret of both Thompson sampling and a UCB algorithm can be decomposed in terms of confidence sets. Define the width of a subset $\tilde{\mathcal{F}} \subseteq \mathcal{F}$ at an action $a \in \mathcal{A}$ by
$$w_{\tilde{\mathcal{F}}}(a) = \sup_{\overline{f}, \underline{f} \in \tilde{\mathcal{F}}} \left[\, \overline{f}(a) - \underline{f}(a) \,\right]. \qquad (1)$$
This is a worst-case measure of the uncertainty about the payoff $f_\theta(a)$ at $a$ given that $f_\theta \in \tilde{\mathcal{F}}$.
Proposition 1. Fix any sequence $\{\mathcal{F}_t : t \in \mathbb{N}\}$, where $\mathcal{F}_t \subseteq \mathcal{F}$ is measurable with respect to $\sigma(H_t)$. Then for any $T \in \mathbb{N}$, with probability 1,
$$R\big(T, \pi^{\mathcal{F}_{1:\infty}}\big) \le \sum_{t=1}^{T} \left[ w_{\mathcal{F}_t}(A_t) + C\,\mathbf{1}(f_\theta \notin \mathcal{F}_t) \right] \qquad (2)$$
$$\mathbb{E}\, R\big(T, \pi^{TS}\big) \le \mathbb{E} \sum_{t=1}^{T} \left[ w_{\mathcal{F}_t}(A_t) + C\,\mathbf{1}(f_\theta \notin \mathcal{F}_t) \right]. \qquad (3)$$
If the confidence sets $\mathcal{F}_t$ are constructed to contain $f_\theta$ with high probability, this proposition essentially bounds regret in terms of the sum of widths $\sum_{t=1}^{T} w_{\mathcal{F}_t}(A_t)$. In this sense, the decomposition bounds regret only in terms of uncertainty about the actions $A_1, \ldots, A_t$ that the algorithm has actually sampled. As actions are sampled, the value of $f_\theta(\cdot)$ at those actions is learned accurately, and hence we expect that the width $w_{\mathcal{F}_t}(\cdot)$ of the confidence sets should diminish over time.
It is worth noting that the regret bound of the UCB algorithm $\pi^{\mathcal{F}_{1:\infty}}$ depends on the specific confidence sets $\{\mathcal{F}_t : t \in \mathbb{N}\}$ used by the algorithm whereas the bound of $\pi^{TS}$ applies for any sequence of confidence sets. However, the decomposition (3) holds only in expectation under the prior distribution. The implications of these decompositions are discussed further in earlier work [16].
In the next section, we design abstract confidence sets $\mathcal{F}_t$ that are shown to contain the true function $f_\theta$ with high probability. Then, in Section 7 we give a worst case bound on the sum $\sum_{t=1}^{T} w_{\mathcal{F}_t}(A_t)$ in terms of the eluder dimension of the class of functions $\mathcal{F}$. When combined with Proposition 1, this analysis provides regret bounds for both Thompson sampling and for a UCB algorithm.
6  Construction of confidence sets
The abstract confidence sets we construct are centered around least squares estimates $\hat{f}_t^{LS} \in \arg\min_{f \in \mathcal{F}} L_{2,t}(f)$, where $L_{2,t}(f) = \sum_{k=1}^{t-1} (f(A_k) - R_k)^2$ is the cumulative squared prediction error.¹ The sets take the form $\mathcal{F}_t := \{f \in \mathcal{F} : \|f - \hat{f}_t^{LS}\|_{2,E_t} \le \sqrt{\beta_t}\}$, where $\beta_t$ is an appropriately chosen confidence parameter, and the empirical 2-norm $\|\cdot\|_{2,E_t}$ is defined by $\|g\|_{2,E_t}^2 = \sum_{k=1}^{t-1} g^2(A_k)$. Hence $\|f - f_\theta\|_{2,E_t}^2$ measures the cumulative discrepancy between the previous predictions of $f$ and $f_\theta$.
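For a finite function class these objects can be computed directly. The following sketch forms the least squares estimate, the empirical 2-norm, the set $\mathcal{F}_t$ for a given $\beta_t$, and the width (1) at each action; the class, the data, and the value of $\beta$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical finite class on 4 actions: each row is a candidate function f(a).
F = rng.uniform(0, 1, size=(30, 4))
f_true = F[0]
A_hist = rng.integers(0, 4, size=100)                     # actions A_1 .. A_{t-1}
R_hist = f_true[A_hist] + 0.1 * rng.standard_normal(100)  # observed rewards

L2 = ((F[:, A_hist] - R_hist) ** 2).sum(axis=1)   # cumulative squared error L_{2,t}(f)
f_ls = F[np.argmin(L2)]                           # least squares estimate f_t^LS

emp_norm = np.sqrt((((F - f_ls)[:, A_hist]) ** 2).sum(axis=1))  # ||f - f_ls||_{2,E_t}
beta = 4.0                                        # confidence parameter (assumed)
Ft = F[emp_norm <= np.sqrt(beta)]                 # confidence set F_t

width = Ft.max(axis=0) - Ft.min(axis=0)           # w_{F_t}(a) at every action a
print(width)
```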
The following lemma is the key to constructing strong confidence sets $(\mathcal{F}_t : t \in \mathbb{N})$. For an arbitrary function $f$, it bounds the squared error of $f$ from below in terms of the empirical loss of the true function $f_\theta$ and the aggregate empirical discrepancy $\|f - f_\theta\|_{2,E_t}^2$ between $f$ and $f_\theta$. It establishes that for any function $f$, with high probability, the random process $(L_{2,t}(f) : t \in \mathbb{N})$ never falls below the process $(L_{2,t}(f_\theta) + \frac{1}{2}\|f - f_\theta\|_{2,E_t}^2 : t \in \mathbb{N})$ by more than a fixed constant. A proof of the lemma is provided in the appendix. Recall that $\sigma$ is a constant given in Assumption 2.
Lemma 1. For any $\delta > 0$ and $f : \mathcal{A} \to \mathbb{R}$,
$$\mathbb{P}\left( L_{2,t}(f) \ge L_{2,t}(f_\theta) + \frac{1}{2}\,\|f - f_\theta\|_{2,E_t}^2 - 4\sigma^2 \log(1/\delta) \;\; \forall t \in \mathbb{N} \right) \ge 1 - \delta.$$
By Lemma 1, with high probability, $f$ can enjoy lower squared error than $f_\theta$ only if its empirical deviation $\|f - f_\theta\|_{2,E_t}^2$ from $f_\theta$ is less than $8\sigma^2 \log(1/\delta)$. Through a union bound, this property holds uniformly for all functions in a finite subset of $\mathcal{F}$. To extend this result to infinite classes of functions, we measure the function class at some discretization scale $\alpha$. Let $N(\mathcal{F}, \alpha, \|\cdot\|_\infty)$ denote the $\alpha$-covering number of $\mathcal{F}$ in the sup-norm $\|\cdot\|_\infty$, and let
$$\beta_t^*(\mathcal{F}, \delta, \alpha) := 8\sigma^2 \log\big( N(\mathcal{F}, \alpha, \|\cdot\|_\infty)/\delta \big) + 2\alpha t \left( 8C + \sqrt{8\sigma^2 \ln(4t^2/\delta)} \right). \qquad (4)$$
¹ The results can be extended to the case where the infimum of $L_{2,t}(f)$ is unattainable by selecting a function with squared prediction error sufficiently close to the infimum.
Proposition 2. For all $\delta > 0$ and $\alpha > 0$, if
$$\mathcal{F}_t = \left\{ f \in \mathcal{F} : \big\|f - \hat{f}_t^{LS}\big\|_{2,E_t} \le \sqrt{\beta_t^*(\mathcal{F}, \delta, \alpha)} \right\}$$
for all $t \in \mathbb{N}$, then
$$\mathbb{P}\left( f_\theta \in \bigcap_{t=1}^{\infty} \mathcal{F}_t \right) \ge 1 - 2\delta.$$
Example 3. Suppose $\Theta \subseteq [0, 1]^d$ and for each $a \in \mathcal{A}$, $f_\theta(a)$ is an $L$-Lipschitz function of $\theta$. Then $N(\mathcal{F}, \alpha, \|\cdot\|_\infty) \le (1 + L/\alpha)^d$ and hence $\log N(\mathcal{F}, \alpha, \|\cdot\|_\infty) \le d \log(1 + L/\alpha)$.
7  Measuring the rate at which confidence sets shrink
Our remaining task is to provide a worst case bound on the sum $\sum_{t=1}^{T} w_{\mathcal{F}_t}(A_t)$. First consider the case of a linearly parameterized model where $f_\theta(a) := \langle \phi(a), \theta \rangle$ for each $\theta \in \Theta \subseteq \mathbb{R}^d$. Then, it can be shown that our confidence set takes the form $\mathcal{F}_t := \{f_\theta : \theta \in \Theta_t\}$ where $\Theta_t \subseteq \mathbb{R}^d$ is an ellipsoid. When an action $A_t$ is sampled, the ellipsoid shrinks in the direction $\phi(A_t)$. Here the explicit geometric structure of the confidence set implies that the width $w_{\mathcal{F}_t}$ shrinks not only at $A_t$ but also at any other action whose feature vector is not orthogonal to $\phi(A_t)$. Some linear algebra leads to a worst case bound on $\sum_{t=1}^{T} w_{\mathcal{F}_t}(A_t)$. For a general class of functions, the situation is much subtler, and we need to measure the way in which the width at each action can be reduced by sampling other actions.
The following result uses our new notion of dimension to bound the number of times the width of the confidence interval for a selected action $A_t$ can exceed a threshold.
Proposition 3. If $(\beta_t \ge 0 \mid t \in \mathbb{N})$ is a nondecreasing sequence and $\mathcal{F}_t := \{f \in \mathcal{F} : \|f - \hat{f}_t^{LS}\|_{2,E_t} \le \sqrt{\beta_t}\}$, then with probability 1,
$$\sum_{t=1}^{T} \mathbf{1}\big( w_{\mathcal{F}_t}(A_t) > \epsilon \big) \le \left( \frac{4\beta_T}{\epsilon^2} + 1 \right) \dim_E(\mathcal{F}, \epsilon)$$
for all $T \in \mathbb{N}$ and $\epsilon > 0$.
Using Proposition 3, one can bound the sum $\sum_{t=1}^{T} w_{\mathcal{F}_t}(A_t)$, as established by the following lemma. To extend our analysis to infinite classes of functions, we consider the $\alpha_T^{\mathcal{F}}$-eluder dimension of $\mathcal{F}$, where
$$\alpha_t^{\mathcal{F}} = \max\left\{ \frac{1}{t^2},\; \inf\big\{ \|f_1 - f_2\|_\infty : f_1, f_2 \in \mathcal{F},\; f_1 \ne f_2 \big\} \right\}. \qquad (5)$$
Lemma 2. If $(\beta_t \ge 0 \mid t \in \mathbb{N})$ is a nondecreasing sequence and $\mathcal{F}_t := \{f \in \mathcal{F} : \|f - \hat{f}_t^{LS}\|_{2,E_t} \le \sqrt{\beta_t}\}$, then with probability 1, for all $T \in \mathbb{N}$,
$$\sum_{t=1}^{T} w_{\mathcal{F}_t}(A_t) \le \frac{1}{T} + \min\big\{ \dim_E(\mathcal{F}, \alpha_T^{\mathcal{F}}),\, T \big\}\, C + 4\sqrt{\dim_E(\mathcal{F}, \alpha_T^{\mathcal{F}})\; \beta_T\, T}. \qquad (6)$$
8  Main Result
Our analysis provides a new guarantee both for Thompson sampling, and for a UCB algorithm $\pi^{\mathcal{F}^*_{1:\infty}}$ executed with appropriate confidence sets $\{\mathcal{F}_t^* : t \in \mathbb{N}\}$. Recall, for a sequence of confidence sets $\{\mathcal{F}_t : t \in \mathbb{N}\}$ we denote by $\pi^{\mathcal{F}_{1:\infty}}$ the UCB algorithm that chooses an action $A_t \in \arg\max_{a \in \mathcal{A}_t} \sup_{f \in \mathcal{F}_t} f(a)$ at each time $t$. We establish bounds that are, up to a logarithmic factor, of order
$$\sqrt{\underbrace{\dim_E\big(\mathcal{F},\, T^{-2}\big)}_{\text{Eluder dimension}} \;\cdot\; \underbrace{\log N\big(\mathcal{F},\, T^{-2},\, \|\cdot\|_\infty\big)}_{\text{log-covering number}} \;\cdot\; T}.$$
This term depends on two measures of the complexity of the function class $\mathcal{F}$. The first, which controls for statistical over-fitting, grows logarithmically in the covering numbers of the function class.
This is a common feature of notions of dimension from statistical learning theory. The second
measure of complexity, the eluder dimension, measures the extent to which the reward value at one
action can be inferred by sampling other actions.
The next two propositions, which provide finite time bounds for a particular UCB algorithm and for Thompson sampling, follow by combining Proposition 1, Proposition 2, and Lemma 2. Define
$$B(\mathcal{F}, T, \delta) = \frac{1}{T} + \min\big\{ \dim_E(\mathcal{F}, \alpha_T^{\mathcal{F}}),\, T \big\}\, C + 4\sqrt{\dim_E(\mathcal{F}, \alpha_T^{\mathcal{F}})\; \beta_T^*(\mathcal{F}, \delta, \alpha_T^{\mathcal{F}})\; T}.$$
Notice that $B(\mathcal{F}, T, \delta)$ is the right hand side of the bound (6) with $\beta_T$ taken to be $\beta_T^*(\mathcal{F}, \delta, \alpha_T^{\mathcal{F}})$.
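For intuition about how the two complexity measures interact, $B(\mathcal{F}, T, \delta)$ can be evaluated numerically from the pieces defined above. The sketch below plugs in the covering-number bound of Example 3 and an $O(d \log(1/\alpha))$ eluder dimension as in Example 4; all constants are illustrative assumptions.

```python
import numpy as np

def beta_star(log_cover, t, delta, alpha, C, sigma):
    # Eq. (4): 8 sigma^2 log(N/delta) + 2 alpha t (8C + sqrt(8 sigma^2 ln(4 t^2/delta)))
    return 8 * sigma**2 * (log_cover + np.log(1 / delta)) \
        + 2 * alpha * t * (8 * C + np.sqrt(8 * sigma**2 * np.log(4 * t**2 / delta)))

def regret_bound(dim_e, log_cover, T, delta, C, sigma, alpha):
    # B(F, T, delta): Lemma 2's bound (6) with beta_T = beta*_T.
    bT = beta_star(log_cover, T, delta, alpha, C, sigma)
    return 1 / T + min(dim_e, T) * C + 4 * np.sqrt(dim_e * bT * T)

d, L, C, sigma, T, delta = 5, 10.0, 1.0, 0.5, 10_000, 0.05
alpha = 1 / T**2
log_cover = d * np.log(1 + L / alpha)   # Example 3 bound for a Lipschitz class
dim_e = d * np.log(T**2)                # O(d log(1/alpha)) scaling as in Example 4
print(regret_bound(dim_e, log_cover, T, delta, C, sigma, alpha))
```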
Proposition 4. Fix any $\delta > 0$ and $T \in \mathbb{N}$, and define for each $t \in \mathbb{N}$,
$$\mathcal{F}_t^* = \left\{ f \in \mathcal{F} : \big\| f - \hat{f}_t^{LS} \big\|_{2,E_t} \le \sqrt{\beta_t^*(\mathcal{F}, \delta, \alpha_T^{\mathcal{F}})} \right\}.$$
Then,
$$\mathbb{P}\left( R\big(T, \pi^{\mathcal{F}^*_{1:\infty}}\big) \le B(\mathcal{F}, T, \delta) \;\middle|\; \theta \right) \ge 1 - 2\delta$$
$$\mathbb{E}\left[ R\big(T, \pi^{\mathcal{F}^*_{1:\infty}}\big) \;\middle|\; \theta \right] \le B(\mathcal{F}, T, \delta) + 2\delta T C.$$
Proposition 5. For any $T \in \mathbb{N}$,
$$\mathbb{E}\, R\big(T, \pi^{TS}\big) \le B\big(\mathcal{F}, T, T^{-1}\big) + 2C.$$
The next two examples show how the regret bounds of Propositions 4 and 5 specialize to $d$-dimensional linear and generalized linear models. For each of these examples $\Theta \subseteq \mathbb{R}^d$ and each action is associated with a known feature vector $\phi(a)$. Throughout these two examples, we fix positive constants $\gamma$ and $s$ and assume that $\gamma \ge \sup_{a \in \mathcal{A}} \|\phi(a)\|$ and $s \ge \sup_{\theta \in \Theta} \|\theta\|$. For each of these examples, a bound on $\dim_E(\mathcal{F}, \epsilon)$ is provided in the supplementary material.
Example 4. Linear Models: Consider the case of a $d$-dimensional linear model $f_\theta(a) := \langle \phi(a), \theta \rangle$. Then, $\dim_E(\mathcal{F}, \epsilon) = O(d \log(1/\epsilon))$ and $\log N(\mathcal{F}, \epsilon, \|\cdot\|_\infty) = O(d \log(1/\epsilon))$. Propositions 4 and 5 therefore yield $O(d \log(1/\alpha_T^{\mathcal{F}})\sqrt{T})$ regret bounds. Since $\alpha_T^{\mathcal{F}} \ge T^{-2}$, this is tight to within a factor of $\log T$ [3], and matches the best available bound for a linear UCB algorithm [2].
Example 5. Generalized Linear Models: Consider the case of a $d$-dimensional generalized linear model $f_\theta(a) := g(\langle \phi(a), \theta \rangle)$ where $g$ is an increasing Lipschitz continuous function. Set $\overline{h} = \sup_{\theta, a} g'(\langle \phi(a), \theta \rangle)$, $\underline{h} = \inf_{\theta, a} g'(\langle \phi(a), \theta \rangle)$ and $r = \overline{h}/\underline{h}$. Then, $\log N(\mathcal{F}, \epsilon, \|\cdot\|_\infty) = O(d \log(\overline{h}/\epsilon))$ and $\dim_E(\mathcal{F}, \epsilon) = O(d r^2 \log(\overline{h}/\epsilon))$, and Propositions 4 and 5 yield $O(r d \log(\overline{h}/\alpha_T^{\mathcal{F}})\sqrt{T})$ regret bounds. To our knowledge, this bound is a slight improvement over the strongest regret bound available for any algorithm in this setting. The regret bound of Filippi et al. [7] is of order $r d \log^{3/2}(T)\,\sqrt{T}$.
9  Conclusion
In this paper, we have analyzed two algorithms, Thompson sampling and a UCB algorithm, in a
very general framework, and developed regret bounds that depend on a new notion of dimension.
In constructing these bounds, we have identified two factors that control the hardness of a particular
multi-armed bandit problem. First, an agent's ability to quickly attain near-optimal performance
depends on the extent to which the reward value at one action can be inferred by sampling other
actions. However, in order to select an action the agent must make inferences about many possible
actions, and an error in its evaluation of any one could result in large regret. Our second measure
of complexity controls for the difficulty of maintaining appropriate confidence sets simultaneously
at every action. While our bounds are nearly tight in some cases, further analysis is likely to yield
stronger results in other cases. We hope, however, that our work provides a conceptual foundation
for the study of such problems, and inspires further investigation.
Acknowledgments
The first author is supported by a Burt and Deedee McMurtry Stanford Graduate Fellowship. This
work was supported in part by Award CMMI-0968707 from the National Science Foundation.
References
[1] V. Dani, T.P. Hayes, and S.M. Kakade. Stochastic linear optimization under bandit feedback. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 355-366, 2008.
[2] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.
[3] P. Rusmevichientong and J.N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395-411, 2010.
[4] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[5] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1587-1627, 2011.
[6] N. Srinivas, A. Krause, S.M. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. Information Theory, IEEE Transactions on, 58(5):3250-3265, May 2012. ISSN 0018-9448. doi: 10.1109/TIT.2011.2182033.
[7] S. Filippi, O. Cappé, A. Garivier, and C. Szepesvári. Parametric bandits: The generalized linear case. Advances in Neural Information Processing Systems, 23:1-9, 2010.
[8] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Online-to-confidence-set conversions and application to sparse stochastic bandits. In Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[9] T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[10] T.L. Lai. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics, pages 1091-1114, 1987.
[11] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235-256, 2002.
[12] O. Cappé, A. Garivier, O.-A. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Submitted to the Annals of Statistics.
[13] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563-1600, 2010.
[14] P.L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35-42. AUAI Press, 2009.
[15] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In Machine Learning: ECML 2006, pages 282-293. Springer, 2006.
[16] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. arXiv preprint arXiv:1301.2609, 2013.
[17] W.R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
[18] S.L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639-658, 2010.
[19] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In Neural Information Processing Systems (NIPS), 2011.
[20] S. Agrawal and N. Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. 2012.
[21] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. arXiv preprint arXiv:1209.3353, 2012.
[22] E. Kauffmann, N. Korda, and R. Munos. Thompson sampling: an asymptotically optimal finite time analysis. In International Conference on Algorithmic Learning Theory, 2012.
[23] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. arXiv preprint arXiv:1209.3352, 2012.
[24] A. Beygelzimer, J. Langford, L. Li, L. Reyzin, and R.E. Schapire. Contextual bandit algorithms with supervised learning guarantees. In Conference on Artificial Intelligence and Statistics (AISTATS), volume 15. JMLR Workshop and Conference Proceedings, 2011.
[25] K. Amin, M. Kearns, and U. Syed. Bandits, query learning, and the haystack dimension. In Proceedings of the 24th Annual Conference on Learning Theory (COLT), 2011.
[26] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397-422, 2003.
4,320 | 491 | SINGLE NEURON MODEL: RESPONSE TO WEAK
MODULATION IN THE PRESENCE OF NOISE
A. R. Bulsara and E. W. Jacobs
Naval Ocean Systems Center, Materials Research Branch, San Diego, CA 92129
F. Moss
Physics Dept., Univ. of Missouri, St. Louis, MO 63121
ABSTRACT
We consider a noisy bistable single neuron model driven by a periodic external modulation. The modulation introduces a correlated switching between states driven by the noise. The information flow through the system, from the modulation to the output switching events, leads to a succession of strong peaks in the power spectrum. The signal-to-noise ratio (SNR) obtained from this power spectrum is a measure of the information content in the neuron response. With increasing noise intensity, the SNR passes through a maximum, an effect which has been called stochastic resonance. We treat the problem within the framework of a recently developed approximate theory, valid in the limits of weak noise intensity, weak periodic forcing and low forcing frequency. A comparison of the results of this theory with those obtained from a linear system FFT is also presented.
INTRODUCTION
Recently, there has been an upsurge of interest in single- or few-neuron nonlinear dynamics (see e.g. Li and Hopfield, 1989; Tuckwell, 1988; Paulus, Gass and Mandell, 1990; Aihara, Takabe and Toyoda, 1990). However, the precise relationship between the many-neuron connected model and a single effective neuron dynamics has not been examined in detail. Schieve, Bulsara and Davis (1991) have considered a network of N symmetrically interconnected neurons embodied, for example, in the "connectionist" models of Hopfield (1982, 1984) or Shamma (1989) (the latter corresponding to a mammalian auditory network). Through an adiabatic elimination procedure, they have obtained, in closed form, the dynamics of a single neuron from the system of coupled differential equations describing the N-neuron problem. The problem has been treated both deterministically and stochastically (through the inclusion of additive and multiplicative noise terms). It is important to point out that the work of Schieve, Bulsara, and Davis does not include a priori a self-coupling term, although the inclusion of such a term can be readily implemented in their theory; this has been done by Bulsara and Schieve (1991). Rather, their theory results in an explicit form of the self-coupling term, in terms of the parameters of the remaining neurons in the network. This term, in effect, renormalizes the self-coupling term in the Shamma and Hopfield models. The reduced or "effective" neuron model is expected to reproduce some of the gross features of biological neurons. The fact that simple single neuron models, such as the model to be considered in this work, can indeed reproduce several features observed in biological experiments has been strikingly demonstrated by Longtin, Bulsara and Moss (1991) through their construction of the inter-spike-interval histograms (ISIHs) using a Schmidt trigger to model the neuron. The results of their simple model agree remarkably well with data obtained in two different experiments (on the auditory nerve fiber of squirrel monkey (Rose, Brugge, Anderson and Hind, 1967) and on the cat visual cortex (Siegal, 1990)).
In this work, we consider such a "reduced" neural element subject to a weak periodic external modulation. The modulation introduces a correlated switching between the
bistable states, driven by the noise with the signal-to-noise ratio (SNR) obtained from the
power spectrum, being taken as a measure of the information content in the neuron
response. As the additive noise variance increases, the SNR passes through a maximum.
This effect has been called "stochastic resonance" and describes a phenomenon in which the noise actually enhances the information content, i.e., the observability of the signal. Stochastic resonance has been observed in a modulated ring laser experiment (McNamara, Wiesenfeld and Roy, 1988; Vemuri and Roy, 1989) as well as in electron paramagnetic resonance experiments (Gammaitoni, Martinelli, Pardi and Santucci, 1991) and in a modulated magnetoelastic ribbon (Spano and Ditto, 1991). The introduction of multiplicative noise (in the coefficient of the sigmoid transfer function) tends to degrade this effect.
THE MODEL; STOCHASTIC RESONANCE
The reduced neuron model consists of a single Hopfield-type computational element,
which may be modeled as a R-C circuit with nonlinear feedback provided by an operational
amplifier having a sigmoid transfer function. The equation (which may be rigorously
derived from a fully connected network model as outlined in the preceding section) may be
cast in the form,
$$\dot{x} + a x - b \tanh x = X_0 + F(t), \qquad (1)$$
where $F(t)$ is Gaussian delta-correlated noise with zero mean and variance $2D$, $X_0$ being a dc input (which we set equal to zero for the remainder of this work). An analysis of (1), including multiplicative noise effects, has been given by Bulsara, Boss and Jacobs (1989). For the purposes of the current work, we note that the neuron may be treated as a particle in a one-dimensional potential given by
$$U(x) = \frac{a x^2}{2} - b \ln \cosh x, \qquad (2)$$
$x$ being the one-dimensional state variable representing the membrane potential. In general, the coefficients $a$ and $b$ depend on the details of the interaction of our reference neuron to the remaining neurons in the network (Schieve, Bulsara and Davis, 1990). The potential described by (2) is bimodal for $b > 1$, with the extrema occurring at (we set $a = 1$ throughout the remainder of this work)
$$c = 0,\; \pm b\left[ 1 - \frac{1 - \tanh b}{1 - b\,\mathrm{sech}^2 b} \right] \cong \pm b \tanh b, \qquad (3)$$
1- b sech 2 b
the approximation holding for large b. Note that the N-shaped characteristic inherent in
the firing dynamics derived from the Hodgkin-Huxley equations (Rinzel and Ermentrout,
1990) is markedly similar to the plot of dV/dx vs. x for the simple bistable system (1).
For a stationary potential , and for D? Vo where Vo is the depth of the deterministic
potential, the probability that a switching event will occur in unit time, i.e. the switching
rate, is given by the Kramers frequency (Kramers, 1940),
'.= \D
l.
dy .xp (U(y)/ D) ( , duxp (- U(z)/ D)
r.
(40)
which, for small noise, may be cast in the form (the local equilibrium assumption of Kramers),
ro::=: (271"r l
ll V(21(0) I V(21(c)]1/'2
exp (- V o/ D),
(4b)
where V(2}(x) == d V /dx 2.
We now include a periodic modulation term esinwt on the right-hand-side of (1) (note
that for ?2(b-1)3/(3b) one does not observe SWitchin in the noise-free system) . This
leads to a modulation (i .e. rocking) of the potential 2) with time: an additional term
- xesinwt is now present on the right-hand-side of (2). n this case, the Kramers rate (4)
becomes time-dependent:
2
q
r(t)::=:roexp(-Xisinwt/D),
(5)
which is accurate only for e? Vo and w? {VI21(?c )}1/'2 . The latter condition is referred to
as the adiabatic approximation. It ensures that the probability density corresponding to
Single Neuron Model: Response
to
Weak Modulation in the Presence of Noise
the time-modulated potential is approximately stationary (the modulation is slow enough
that the instantaneous probability density can "adiabatically" relax to a succession of
quasi-stationary states) .
We now follow the work of McNamara and Wiesenfeld (1989), developing a two-state
model by introducing a probability of finding the system in the left or right well of the
potential. A rate equation is constructed based on the Kramers rate r(t) given by (5).
Within the framework of the adiabatic approximation, this rate equation may be integrated
to yield the time-dependent conditional probability density function for finding the system
in a given well of the potential. This leads directly to the autocorrelation function
< :z:{t) :z:{t + 1') > and finally, via the Wiener-Khinchine theorem, to the power spectral density P(O). The details are given by Bulsara, Jacobs, Zhou, Moss and Kiss (1991) :
P 0
()
=
[ 1-
2rg f 2C 2
1[
D2{4rl+(2)
8c2ro
4rl+02
1+
47rc 4 rg f2
6 w- 0
D2(4rg+02) (
),
(6)
where the first term on the right-hand-side represents the noise background, the second
term being the signal strength. Taking into account the finite bandwidth of the measuring
syste~, ~e replace {for the I?urpose of compariso~ with e~perimental results} t~e .deltafunctlOn m (6) by the quantity (.6w)-1 where .6w IS the Width of a frequency bm m the
(experimental) Fourier transformation. We introduce signal-to-noise ratio SNR = 10 log R in
decibels, where R is given by
1 D24~Cr~~f:2)
R == +
(.6w)-1
[1-
D2
~:~t:2w2)
r
J
[
4r~gc~~2l ?
(7)
In writing down the above expressions, the approximate Kramers rate (4b) has been used .
However, in what follows, we discuss the effects of replacing it by the exact expression (4a).
The location of the maximum of the SNR is found by differentiating the above equation; It
depends on the amplitude f and the frequency w of the modulation, as well as the additive
noise variance D and the parameters a and b in the potential.
The SNR computed via the above expression increases as the modulation frequency is
lowered relative to the Kramers frequency . Lowering the modulation frequency also sharpens the resonance peak, and shifts it to lower noise values, an effect that has been demonstrated, for example, by Bulsara, Jacobs, Zhou, Moss and Kiss (1991). The above may be
readily explained. The effect of the weak modulating signal is to alternately raise and lower
the potential well with respect to the barrier height Vo. In the absence of noise and for
l ?
V o, the system cannot switch states, i.e. no information is transferred to the output. In
the presence of noise, however, the system can switch states through stochastic activation
over the barrier . Although the switching process is statistical, the transition probability is
periodically modulated by the external signal. Hence, the output will be correlated, to some
degree, with the input signal (the modulation "clocks" the escape events and the whole process will be optimized if the noise by itself produces, on average, two escapes within one
modulation cycle) .
Figure 1 shows the SNR as a function of the noise variance 2D . The potential barrier
height Vo:;;:: 2.4 for the b = 2.5 case considered. Curves corresponding to the adiabatic expression (7), as well as the SNR obtained through an exact (numerical) calculation of the Kramers rate, using (4a) are shown, along with the data points obtained via direct numerical
simulation of (1). The Kramers rate at the maximum (2D ~ V o) of the SNR curve is 0.72.
This is much greater than the driving frequency w = 0.0393 used in this plot. The curve
computed using the exact expression (4a) fits the numerically obtained data points better
than the adiabatic curve at high noise strengths . This is to be expected in light of the
approximations used in deriving (4b) from (4a). Also, the expression (6) has been derived
from a two-state theory (taking no account of the potential) . At low noise, we expect the
two-state theory to agree with the actual system more closely . This is reflected in the resonance curves of figure 1 with the adiabatic curve differing (at the maximum) from the data
points by approximately Idb. We reiterate that the SNR, as well as the agreement between
the data points and the theoretical curves improves as the modulation frequency is lowered
relative to the Kramers rate (for a fixed frequency this can be achieved by changing the
potential barrier height via the parameters a and b in (2)). On the same plot, we show the
SNR obtained by computing directly the Fourier transform of the signal and noise. At very
low noise, the Mideal linear filter Myields results that are considerably better than stochastic
resonance. However, at moderate-to-high noise, the stochastic resonance, which may be
looked upon as a Mnonlinear filter M, offers at least a 2.5db improvement for the parameters
of the figure. As indicated above, the improvement in performance achieved by stochastic
resonance over the "ideal linear filterM may be enhanced by raising the Kramers frequency
of the nonlinear filter relative to the modulation frequency w. In fact, as long as the basic
conditions of stochastic resonance are realized, the nonlinear filter will outperform the best
linear filter except at very low noise.
[Figure 1: SNR (dB, roughly 12.5-22.5) vs. noise variance 2D (0-5).]
Fig. 1. SNR using adiabatic theory, eqn. (7), with $(b, \omega, \epsilon) = (2.5, 0.0393, 0.3)$ and $r_0$ given by (4b) (solid curve) and (4a) (dotted curve). Data points correspond to SNR obtained via direct simulation of (1) (frequency resolution $= 6.1 \times 10^{-6}$ Hz). Dashed curve corresponds to best possible linear filter (see text).
Multiplicative Noise Effects
We now consider the case when the neuron is exposed to both additive and multiplicative noise. In this case, we set $b(t) = b_0 + \xi(t)$, where
$$\langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\, \xi(s) \rangle = 2 D_m\, \delta(t - s). \qquad (8)$$
In a real system such fluctuations might arise through the interaction of the neuron with other neurons in the network or with external fluctuations. In fact, Schieve, Bulsara and Davis (1991) have shown that when one derives the "reduced" neuron dynamics in the form (1) from a fully connected N-neuron network with fluctuating synaptic couplings, then the resulting dynamics contain multiplicative noise terms of the kind being discussed here. Even Langevin noise by itself can introduce a pitchfork bifurcation into the long-time dynamics of such a reduced neuron model under the appropriate conditions (Bulsara and Schieve, 1991). In an earlier publication (Bulsara, Boss and Jacobs, 1989), it was shown that these fluctuations can qualitatively alter the behavior of the stationary probability density function that describes the stochastic response of the neuron. In particular, the multiplicative noise may induce additional peaks or erase peaks already present in the density (see for example Horsthemke and Lefever, 1984). In this work we maintain $D_m$ sufficiently small that such effects are absent.
In the absence of modulation, one can write down a Fokker-Planck equation for the probability density function $p(x, t)$ describing the neuron response:
$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\big[ \alpha(x)\, p \big] + \frac{1}{2} \frac{\partial^2}{\partial x^2}\big[ \beta(x)\, p \big], \qquad (9)$$
where
$$\alpha(x) \equiv -x + b_0 \tanh x + D_m \tanh x\, \mathrm{sech}^2 x, \qquad \beta(x) \equiv 2\left( D + D_m \tanh^2 x \right), \qquad (10)$$
$D$ being the additive noise intensity. In the steady state, (9) may be solved to yield a "macroscopic potential" function analogous to the function $U(x)$ defined in (2):
$$U(x) = -2 \int^{x} \frac{\alpha(z)}{\beta(z)}\, dz + \ln \beta(x). \qquad (11)$$
From (11), one obtains the turning points of the potential through the solution of the transcendental equation
$$x - b_0 \tanh x + D_m \tanh x\, \mathrm{sech}^2 x = 0. \qquad (12)$$
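The nonzero turning point of (12) can be obtained numerically by bracketing; the parameter values below are illustrative (they match those used for Figure 2).

```python
import numpy as np
from scipy.optimize import brentq

b0, Dm = 2.0, 0.1
g = lambda x: x - b0 * np.tanh(x) + Dm * np.tanh(x) / np.cosh(x)**2

x1 = brentq(g, 0.5, 5.0)   # nonzero turning point (x = 0 is always a root)
print(x1, -x1)             # the potential is symmetric, so -x1 is also a root
```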
The modified Kramers rate, $r_{0m}$, for this $x$-dependent diffusion process has been derived by Englund, Snapp and Schieve (1984):
$$r_{0m} = \frac{\beta(0)}{4\pi} \left[ U^{(2)}(x_1)\, \big| U^{(2)}(0) \big| \right]^{1/2} \exp\big[ U(x_1) - U(0) \big], \qquad (13)$$
where the maximum of the potential occurs at $x = 0$ and the left minimum occurs at $x = x_1$. If we now assume that a weak sinusoidal modulation $\epsilon \sin \omega t$ is present, we may once again introduce this term into the potential as in the preceding case, again making the adiabatic approximation. We easily obtain for the modified time-dependent Kramers rate,
$$r_{\pm}(t) = \frac{\beta(0)}{4\pi} \left[ U^{(2)}(x_1)\, \big| U^{(2)}(0) \big| \right]^{1/2} \exp\left[ U(x_1) - U(0) \mp 2\epsilon \sin \omega t \int_{0}^{x_1} \frac{dz}{\beta(z)} \right]. \qquad (14)$$
Following the same procedure as we used in the additive noise case, we can obtain the ratio $R = 1 + S/(\Delta\omega\, N)$ for the case of both noises being present. The result is
$$R = 1 + \frac{\pi\, r_{0m}\, \eta_1^2}{2\, \Delta\omega} \left[ 1 - \frac{2\, r_{0m}\, \eta_1^2}{4 r_{0m}^2 + \omega^2} \right]^{-1}, \qquad (15)$$
where
$$\eta_1 = 2\epsilon\, \eta_0, \qquad (16a)$$
and
$$\eta_0 = \int_{0}^{x_1} \frac{dz}{\beta(z)} = \frac{1}{2(D + D_m)} \left[ x_1 + m^{1/2} \tan^{-1}\!\big( m^{1/2} \tanh x_1 \big) \right], \qquad m = D_m / D. \qquad (16b)$$
[Figure 2: SNR (dB) vs. additive noise intensity D over roughly 0.3-1.5.]
Fig. 2. Effect of multiplicative noise, eqn. (15), with $(b_0, \omega, \epsilon) = (2, 0.31, 0.4)$ and $D_m = 0$ (top curve), 0.1 (middle curve) and 0.2 (bottom curve).
In figure 2 we show the effects of both additive and multiplicative noise by plotting the SNR for a fixed external frequency $\omega = 0.31$ with $(b_0, \epsilon) = (2, 0.4)$ as a function of the additive noise intensity $D$. The curves correspond to different values of $D_m$, with the uppermost curve corresponding to $D_m = 0$, i.e., the case of additive noise only. We note that
increasing Dm leads to a decrease in the SNR as well as a shift in its maximum to lower
values of D. These effects are easily explained using the results of Bulsara, Boss and Jacobs
(1989), wherein it was shown that the effect of multiplicative noise is to decrease, on average, the potential barrier height and to shift the locations of the stable steady states. This leads to a degradation of the stochastic resonance effect at large $D_m$ while shifting the location of the maximum toward lower $D$.
THE POWER SPECTRUM
We turn now to the power spectrum obtained via direct numerical simulation of the
dynamics (1). It is evident that a time series obtained by numerical simulation of (1) would display switching events between the stable states of the potential, the residence time in each state being a random variable. The intrawell motion consists of a random component superimposed on a harmonic component, the latter increasing as the amplitude $\epsilon$ of the modulation increases. In the low noise limit, the deterministic motion dominates. However, the adiabatic theory used in deriving the expressions (6) and (7) is a two-state theory that simply follows the switching events between the states but takes no account of this intrawell motion. Accordingly, in what follows, we draw the distinction between the full dynamics obtained via direct simulation of (1) and the "equivalent two-state dynamics" obtained by passing the output through a two-state filter. Such a filter is realized digitally by replacing the time series obtained from a simulation of (1) with a time series wherein the $x$ variable takes on the values $x = \pm c$, depending on which state the system is in (a sketch of this filtering step appears at the end of this section). Figure 3 shows the power spectral density obtained from this equivalent two-state system. The top curve represents the signal-free case and the bottom curve shows the effects of turning on the signal. Two features are readily apparent:
[Figure 3: power spectral density (dB) vs. frequency (0-0.15 Hz).]
Fig. 3. Power spectral density via direct simulation of (1), with $(b, \omega, \epsilon, 2D) = (1.6056, 0.03, 0.65, 0.25)$. Bottom curve: $\epsilon = 0$ case.
1. The power spectrum displays odd harmonics of the modulation; this is a hallmark of stochastic resonance (Zhou and Moss, 1990). If one destroys the symmetry of the potential (1) (through the introduction of a small dc driving term, for example), the even harmonics of the modulation appear.
2. The noise floor is lowered when the signal is turned on. This effect is particularly striking in the two-state dynamics. It stems from the fact that the total area under the spectral density curves in figure 3 (i.e. the total power) must be conserved (a consequence of Parseval's theorem). The power in the signal spikes therefore grows at the expense of the background noise power. This is a unique feature of weakly modulated bistable noisy systems of the type under consideration in this work, and graphically illustrates the ability of noise to assist information flow to the output (the signal). The effect may be quantified on examining equation (6) above. The noise power spectral density (represented by the first term on the right-hand-side) decreases as the term $2 r_0 \epsilon^2 c^2 \{ D^2 (4 r_0^2 + \omega^2) \}^{-1}$ approaches unity. This reduction in the noise floor is most pronounced when the signal is of low frequency (compared to the Kramers rate) and large amplitude. A similar effect may be observed in the spectral density corresponding to the full system dynamics. In this case, the total power is only approximately conserved (in a finite bandwidth) and the effect is not so
pronounced.
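A minimal sketch of the two-state filtering and spectral estimate described in this section: integrate (1), clip the trajectory to $\pm c$, and take the FFT of the clipped signal. The window length, time step, and treatment of the Figure 3 drive frequency as angular are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 1.0, 1.6056
D = 0.125                       # noise variance 2D = 0.25, as in Figure 3
eps, omega = 0.65, 0.03         # modulation; omega treated as angular (assumed)
dt, n = 0.05, 2**17

x = np.zeros(n)
for k in range(n - 1):
    drift = -a * x[k] + b * np.tanh(x[k]) + eps * np.sin(omega * k * dt)
    x[k + 1] = x[k] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

c = b * np.tanh(b)
two_state = np.where(x >= 0.0, c, -c)      # digital two-state filter: x -> +/- c

psd = np.abs(np.fft.rfft(two_state)) ** 2 * dt / n   # crude periodogram
freqs = np.fft.rfftfreq(n, d=dt)
# Odd harmonics of the drive are expected near omega/(2 pi), 3 omega/(2 pi), ...
```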
DISCUSSION
In this paper we have presented the details of a cooperative stochastic process that
occurs in nonlinear systems subject to weak deterministic modulating signals embedded in
a white noise background. The so-called "stochastic resonance" phenomenon may actually
be interpreted as a noise-assisted flow of information to the output. The fact that such simple nonlinear dynamic systems (e.g. an electronic Schmidt trigger) are readily realizable in
hardware, points to the possible utility of this technique (far beyond the application to signal processing in simple neural networks) as a nonlinear filter. We have demonstrated that,
by suitably adjusting the system parameters (in effect changing the Kramers rate), we can
optimize the response to a given modulation frequency and background noise. In a practical
system, one can move the location and height of the bell-shaped response curve of figure 1
by changing the potential parameters and, possibly, infusing noise into the system. The
noise-enhancement of the SNR improves with decreasing frequency. This is a hallmark of
stochastic resonance and provides one with a possible filtering technique at low frequency .
It is important to point out that all the effects reported in this work have been reproduced
via analog simulations (Bulsara, Jacobs, Zhou, Moss and Kiss, 1991: Zhou and Moss, 1990).
Recently a new approach to the processing of information in noisy nonlinear dynamic systems, based on the probability density of residence times in one of the stable states of the
potential, has been developed by Zhou, Moss and Jung (1990). This technique, which offers
an alternative to the FFT, was applied by Longtin, Moss and Bulsara (1991) in their construction of the inter-spike-interval histograms that describe neuronal spike trains in the
central nervous system . Their work points to the important role played by noise in the
procesing of information by the central nervous system. The beneficial role of noise has
already been recognized by Buhmann and Schulten (1986, 87). They found that noise, deliberately added to the deterministic equations governing individual neurons in a network
significantly enhanced the network's performance and concluded that " ... the noise ... is an
essential feature of the information processing abilities of the neural network and not a
mere source of disturbance better suppressed ... "
Acknowledgements
This work was carried out under funding from the Office of Naval Research grant nos.
N00014-90-AF-00001 and N00014-90-J-1327.
References
Aihara K., Takabe T., and Toyoda M., 1990; "Chaotic Neural Networks", Phys. Lett.
A144, 333-340.
Buhmann J., and Schulten K., 1986; "Influence of Noise on the Behavior of an Autoassociative Neural Network", in J. Denker (ed) Neural Networks for Computing (AIP Conf. Proceedings, vol. 151).
Buhmann J., and Schulten K., 1987; "Influence of Noise on the Function of a "Physiological" Neural Network", Biol. Cybern. 56, 313-327.
Bulsara A., Boss R. and Jacobs E., 1989; "Noise Effects in an Electronic Model of a Single
Neuron", Biol. Cybern. 61, 212-222.
Bulsara A., Jacobs E., Zhou T., Moss F. and Kiss L., 1991; "Stochastic Resonance in a Single Neuron Model: Theory and Analog Simulation", J. Theor. Biol. 154, 531-555.
Bulsara A. and Schieve W., 1991; "Single Effective Neuron: Macroscopic Potential and
Noise-Induced Bifurcations", Phys. Rev. A, in press.
Englund J., Snapp R., Schieve W., 1984; "Fluctuations, Instabilities and Chaos in the
Laser-Driven Nonlinear Ring Cavity", in E. Wolf (ed) Progress in Optics, vol. XXI (North-Holland, Amsterdam).
Gammaitoni L., Martinelli M., Pardi L., and Santucci S., 1991; "Observation of Stochastic
Resonance in Bistable Electron Paramagnetic Resonance Systems", preprint.
Hopfield J., 1982; "Neural Networks and Physical Systems with Emergent Computational
Capabilities", Proc. Natl. Acad. Sci. 79, 2554-2558.
Hopfield J., 1984; "Neurons with Graded Responses have Collective Computational Abilities like those of Two-State Neurons", Proc. Natl. Acad. Sci. 81, 3088-3092.
Horsthemke W., and Lefever R., 1984; Noise-Induced Transitions (Springer-Verlag, Berlin).
Kramers H., 1940; "Brownian Motion in a Field of Force and the Diffusion Model of Chemical Reactions", Physica 1, 284-304.
Li Z., and Hopfield J., 1989; "Modeling the Olfactory Bulb and its Neural Oscillatory Processings", Biol. Cybern. 61, 379-392.
Longtin A., Bulsara A., and Moss F., 1991; "Time-Interval Sequences in Bistable Systems
and the Noise-Induced Transmission of Information by Sensory Neurons", Phys. Rev. Lett.
67, 656-659.
McNamara B., Wiesenfeld K., and Roy R., 1988; "Observation of Stochastic Resonance in a
Ring Laser", Phys. Rev. Lett. 60, 2626-2629.
McNamara B., and Wiesenfeld K., 1989; "Theory of Stochastic Resonance", Phys. Rev.
A39, 4854-4869.
Paulus M., Gass S., and Mandell A., 1990; "A Realistic Middle-Layer for Neural Networks", Physica D40, 135-155.
Rinzel J., and Ermentrout B., 1989; "Analysis of Neural Excitability and Oscillations", in
Methods in Neuronal Modeling, eds. C. Koch and I. Segev (MIT Press, Cambridge, MA).
Rose J., Brugge J., Anderson D., and Hind J., 1967; "Phase-locked Response to Low-frequency Tones in Single Auditory Nerve Fibers of the Squirrel Monkey", J. Neurophysiol. 30, 769-793.
Schieve W., Bulsara A. and Davis G., 1990; "Single Effective Neuron", Phys. Rev. A43,
2613-2623.
Shamma S., 1989; "Spatial and Temporal Processing in Central Auditory Networks", in
Methods in Neuronal Modeling, eds. C. Koch and I. Segev (MIT Press, Cambridge, MA).
Siegal R., 1990; "Nonlinear Dynamical System Theory and Primary Visual Cortical Processing", Physica 42D, 385-395.
Spano M., and Ditto W., 1991; "Experimental Observation of Stochastic Resonance in a
Magnetoelastic Ribbon", preprint.
Tuckwell H., 1989; "Stochastic Processes in the Neurosciences", (SIAM, Philadelphia).
Vemuri G., and Roy R., 1990; "Stochastic Resonance in a Bistable Ring Laser", Phys. Rev.
A39, 4668-4674.
Zhou T., and Moss F., 1990; "Analog Simulations of Stochastic Resonance", Phys. Rev.
A41, 4255-4264.
Zhou T., Moss F., and Jung P., 1991; "Escape-Time Distributions of a Periodically Modulated Bistable System with Noise", Phys. Rev. A42, 3161-3169.
| 491 |@word middle:2 sharpens:1 suitably:1 d2:4 simulation:10 jacob:12 lowfrequency:1 solid:1 reduction:1 series:3 correspondin:1 cort:1 reaction:1 current:1 paramagnetic:2 bta:1 activation:1 dx:3 must:1 readily:4 transcendental:1 additive:9 periodically:2 numerical:4 realistic:1 heir:1 mandell:2 rinzel:2 plot:3 v:1 stationary:4 nervous:2 tone:1 accordingly:1 correlat:1 provides:1 ron:1 location:4 ditto:2 height:5 rc:1 along:1 constructed:1 direct:5 differential:1 c2:1 consists:2 autocorrelation:1 olfactory:1 introduce:3 inter:2 indeed:1 expected:2 behavior:2 ara:1 adiabatically:1 decreasing:1 actual:1 increasing:3 becomes:1 provided:1 erase:1 circuit:1 what:2 kind:1 interpreted:1 monkey:2 developed:2 differing:1 extremum:1 finding:2 transformation:1 temporal:1 ro:3 unit:1 grant:1 appear:1 louis:1 planck:1 local:1 treat:1 tends:1 limit:2 consequence:1 switching:8 acad:2 syste:1 firing:1 modulation:25 approximately:3 fluctuation:4 might:1 examined:1 quantified:1 co:1 shamma:3 bi:1 locked:1 unique:1 practical:1 chaotic:1 procedure:2 area:1 bell:1 significantly:1 induce:1 cannot:1 martinelli:2 influence:2 writing:1 instability:1 optimize:1 equivalent:2 deterministic:4 demonstrated:3 dz:2 resolution:1 wit:1 gammaitoni:2 deriving:2 analogous:1 diego:1 construction:2 trigger:2 enhanced:2 exact:3 tan:1 agreement:1 element:2 roy:4 particularly:1 mammalian:1 cooperative:1 observed:3 bottom:3 role:2 preprint:2 solved:1 ensures:1 connected:3 cycle:1 decrease:3 gross:1 rose:2 digitally:1 ermentrout:2 rigorously:1 dynamic:14 depend:1 raise:1 weakly:1 ove:1 exposed:1 upon:1 f2:1 strikingly:1 easily:2 hopfield:7 emergent:1 cat:1 fiber:2 laser:4 univ:1 train:1 describe:1 effective:2 apparent:1 ive:2 relax:1 ability:3 transform:1 noisy:3 itself:2 cop:1 reproduced:1 sequence:1 net:2 interconnected:1 interaction:2 remainder:2 turned:1 pronounced:2 hrough:1 enhancement:1 transmission:1 siegal:2 produce:1 wiesenfeld:4 ring:4 coupling:4 depending:1 odd:1 progress:1 strong:1 implemented:1 bulsara:21 closely:1 filter:9 stochastic:21 alp:1 bistable:7 material:1 elimination:1 biological:1 theor:1 assisted:1 squirrel:2 physica:3 sufficiently:1 considered:3 koch:2 exp:3 equilibrium:1 mo:1 electron:2 driving:2 a2:1 purpose:1 proc:2 tanh:1 modulating:2 uppermost:1 mit:2 destroys:1 gaussian:1 idb:1 modified:2 rather:1 zhou:9 cr:1 partic:1 publication:1 office:1 derived:4 ax:1 l0:1 naval:2 improvement:2 superimposed:1 bos:4 renormalizes:1 mou:1 dependent:4 integrated:1 reproduce:2 quasi:1 resonance:23 bifurcation:2 equal:1 once:1 field:1 having:1 shaped:2 zz:1 represents:2 toyoda:2 alter:1 connectionist:1 inherent:1 few:1 escape:2 missouri:1 individual:1 phase:1 maintain:1 amplifier:1 interest:1 ribbon:2 introduces:1 light:1 tj:1 natl:2 accurate:1 theoretical:1 earlier:1 modeling:3 measuring:1 ott:1 introducing:1 snr:17 mcnamara:4 examining:1 reported:1 periodic:4 considerably:1 st:2 density:13 peak:4 siam:1 pitchfork:1 bu:1 physic:1 andersen:1 again:2 central:3 possibly:1 external:5 stochastically:1 conf:1 li:2 syst:1 account:3 potential:24 sinusoidal:1 de:1 resented:1 north:1 int:1 coefficient:2 depends:1 reiterate:1 multiplicative:10 tion:1 closed:1 capability:1 wiener:1 variance:5 characteristic:1 succession:2 yield:2 correspond:2 weak:10 sid:1 mere:1 phys:9 ed:5 synaptic:1 frequency:18 dm:11 auditory:4 adjusting:1 noooi4:1 logical:1 improves:2 brugge:2 amplitude:3 actually:2 nerve:2 ta:2 follow:1 reflected:1 response:12 wherein:2 done:1 anderson:1 governing:1 clock:1 hand:4 eqn:2 replacing:2 nonlinear:9 indicated:1 
grows:1 effect:25 contain:1 deliberately:1 hence:1 chemical:1 excitability:1 tuckwell:2 white:1 ll:1 self:3 width:1 davis:5 steady:2 evident:1 vo:5 motion:4 hin:1 hallmark:2 harmonic:3 instantaneous:1 consideration:1 recently:3 funding:1 chaos:1 sigmoid:2 rl:2 physical:1 discussed:1 he:2 analog:3 numerically:1 cambridge:2 outlined:1 inclusion:2 lowered:3 stable:3 sech:3 moderate:1 driven:4 forcing:2 verlag:1 a41:1 conserved:2 minimum:1 additional:2 greater:1 preceding:2 floor:2 eo:1 recognized:1 signal:17 dashed:1 branch:1 full:2 stem:1 calculation:1 offer:2 long:2 af:1 z5:1 basic:1 longtin:3 histogram:2 bimodal:1 achieved:2 xxi:1 background:4 remarkably:1 ures:1 interval:3 source:1 concluded:1 macroscopic:1 w2:1 pass:2 markedly:1 subject:2 hz:2 cyber:3 db:1 induced:3 flow:3 presence:5 symmetrically:1 ideal:1 iii:1 enough:1 fft:2 switch:2 fit:1 zi:1 bandwidth:2 observability:1 shift:3 absent:1 expression:7 utility:1 assist:1 passing:1 autoassociative:1 cosh:1 hardware:1 reduced:4 outperform:1 dotted:1 delta:1 neuroscience:1 write:1 vol:2 zv:1 changing:3 pj:1 diffusion:2 lowering:1 hodgkin:1 striking:1 throughout:1 electronic:2 residence:2 oscillation:1 draw:1 ob:1 dy:1 rpm:1 eral:1 layer:1 played:1 display:2 strength:2 occur:1 optic:1 huxley:1 segev:2 nhb:2 fourier:2 hind:2 transferred:1 developing:1 membrane:1 ate:1 describes:2 em:2 beneficial:1 unity:1 suppressed:1 rev:8 making:1 aihara:2 dv:1 explained:2 erm:1 xo:2 taken:1 equation:9 agree:2 describing:2 discus:1 turn:1 denker:1 observe:1 fluctuating:1 spectral:5 appropriate:1 ocean:1 schmidt:2 alternative:1 rej:1 top:1 remaining:2 include:2 cf:1 graded:1 move:1 already:2 quantity:1 spike:4 looked:1 realized:2 occurs:3 added:1 primary:1 enhances:1 sci:2 berlin:1 procesing:1 degrade:1 toward:1 rom:1 modeled:1 relationship:1 ratio:4 ql:1 holding:1 expense:1 lil:1 collective:1 neuron:35 observation:2 finite:2 gas:2 langevin:1 precise:1 dc:1 gc:1 intensity:4 procedings:1 cast:2 optimized:1 raising:1 distinction:1 alternately:1 able:1 beyond:1 dynamical:1 including:1 shifting:1 power:14 event:5 treated:2 force:1 disturbance:1 santucci:2 turning:2 buhmann:3 representing:1 carried:1 coupled:1 embodied:1 moss:16 philadelphia:1 text:1 acknowledgement:1 relative:3 embedded:1 fully:2 expect:1 parseval:1 filtering:1 degree:1 bulb:1 xp:1 plotting:1 kramers:16 cd:1 jung:2 free:2 zc:1 side:3 taking:2 barrier:5 differentiating:1 feedback:1 depth:1 curve:20 valid:1 transition:2 lett:3 cortical:1 sensory:1 qualitatively:1 san:1 bm:1 spano:2 far:1 approximate:2 obtains:1 feat:1 cavity:1 ml:1 equ:1 xi:3 spectrum:7 transfer:2 ca:1 correlated:3 operational:1 symmetry:1 paulus:2 whole:1 noise:68 arise:1 snapp:2 neuronal:3 fig:3 referred:1 slow:1 cerm:1 adiabatic:9 deterministically:1 explicit:1 schulten:3 xl:2 lw:1 tin:1 theorem:2 down:2 decibel:1 physiological:1 dominates:1 derives:1 essential:1 a43:1 illustrates:1 occurring:1 rg:4 simply:1 visual:2 amsterdam:1 kiss:4 bo:2 holland:1 springer:1 corresponds:1 fokker:1 wolf:1 ma:2 conditional:1 bioi:4 replace:1 absence:2 content:3 vemuri:2 except:1 degradation:1 called:3 total:3 experimental:1 latter:3 modulated:6 dept:1 phenomenon:2 ex:1 |
4,321 | 4,910 | Adaptive Market Making via Online Learning
Jacob Abernethy*
Computer Science and Engineering
University of Michigan
[email protected]
Satyen Kale
IBM T. J. Watson Research Center
[email protected]
Abstract
We consider the design of strategies for market making in an exchange. A market
maker generally seeks to profit from the difference between the buy and sell price
of an asset, yet the market maker also takes exposure risk in the event of large price
movements. Profit guarantees for market making strategies have typically required
certain stochastic assumptions on the price fluctuations of the asset in question;
for example, assuming a model in which the price process is mean reverting. We
propose a class of ?spread-based? market making strategies whose performance
can be controlled even under worst-case (adversarial) settings. We prove structural
properties of these strategies which allows us to design a master algorithm which
obtains low regret relative to the best such strategy in hindsight. We run a set of
experiments showing favorable performance on recent real-world stock price data.
1
Introduction
When a trader enters a market, say a stock or commodity market, with the desire to buy or sell a
certain quantity of an asset, how is this trader guaranteed to find a counterparty to agree to transact
at a reasonable price? This is not a problem in a liquid market, with a deep pool of traders ready to
buy or sell at any time, but in a thin market the lack of counterparties can be troublesome. A rushed
trader may even be willing to transact at a worse price in exchange for immediate execution.
This is where a market maker (MM) can be quite useful. A MM is any agent that participates in a
market by offering to both buy and sell the underlying asset at any time. To put it simply, a MM
consistently guarantees liquidity to the marketplace by promising to be a counterparty to any trader.
The act of market making has both potential benefits and risks. For one, the MM bears the risk
of transacting with better-informed traders that may know much more about the movement of the
asset?s price, and in such scenarios the MM can take on a large inventory of shares that it may have
to offload at a worse price. On the positive side, the MM can profit as a result of the bid-ask spread,
the difference between the MM?s buy price and sell price. In other words, if the MM buys 100 shares
of a stock from one trader at a price of p, and then immediately sells 100 shares of stock to another
trader at a price of p + ε, the MM records a profit of 100ε.
This describes the central goal of a profitable market making strategy: minimize the inventory risk
of large movements in the price while simultaneously aiming to benefit from the bid-ask spread.
The MM strategy has a state, which is the current inventory or holdings, receives as input order and
price data, and must decide what quantities and at what prices to offer in the market. In the present
paper we assume that the MM interacts with a continuous double auction via an order book, and the
MM can place both market and limit orders to the order book.
A number of MM strategies have been proposed, and in many cases certain profit/loss guarantees
have been given. But to the best of our knowledge all such guarantees (aside from [4]) have required
*
Work performed while the author was in the CIS Department at the University of Pennsylvania and funded
by a Simons Postdoctoral Fellowship
stochastic assumptions on the traders or the sequence of price fluctuations. Often, e.g., one needs to
assume that the underlying price process exhibits a mean reverting behavior to guarantee profit.
In this paper we focus on constructing MM strategies that achieve non-stochastic guarantees on
profit and loss. We begin by proposing a class of market making strategies, parameterized by the
choice of bid-ask spread and liquidity, and we establish a data-dependent expression for the profit
and loss of each strategy at the end of a sequence of price fluctuations. The model we consider, as
well as the aforementioned class of strategies, builds off of the work of Chakraborty and Kearns [4].
In particular, we assume the MM is given an exogenously-specified price time series that is revealed
online. We also assume that the MM is able to make and cancel orders after every price fluctuation.
We extend the prior work [4] by considering the problem of online learning among this parameterized class of strategies. Performance is measured in terms of regret, which is the difference between
the total value of the learner?s algorithm and that of the best strategy in hindsight. While this problem is related to the problem of learning from expert advice, standard algorithms assume that the
experts have no state; i.e. in each round, the cost of following any given expert?s advice is the same
as the cost to that expert. This is not the case for online learning of the bid-ask spread, where the
state, represented by the inventory of each strategy, affects the payoffs. We can prove however that
due to the combinatorial structure of these strategies, one can afford to switch state with bounded
cost. Using these structural properties we prove the following main result of this paper:
Theorem 1 There is an online learning algorithm that, under a bounded price volatility assumption (see Definition 1), has $O(\sqrt{T})$ regret after T trading periods relative to the best spread-based strategy.
Experimental simulations of our online learning algorithms with real-world price data suggest that
this approach is quite promising; our algorithm frequently performs nearly as well as the best strategy, and is often superior. Such empirical results provide some evidence that regret minimization
techniques are well-suited for adaptively setting the bid-ask spread.
Related Work Perhaps the most popular model to study market making has been the Glosten-Milgrom model [11]. In this setting the market is facilitated by a specialist, a monopolistic market
maker that acts as the middle man for all trades. There has been some work in the Computer Science
literature that has considered the sequential decision problem of the specialist [8, 10], and this work
was extended to look at the more modern order book mechanism [9]. In our model traders interact
directly with an order book, not via a specialist, and the prices are set exogenously as in [4].
Over the past ten years that has been a burst of research within the AI and EconCS community on
the design of prediction markets in which traders can bet on the likelihood of future uncertain events
(like horse races, or elections). Much of this started with a couple of key results of Robin Hanson
[12, 13] who described how to design subsidized prediction markets via the use of proper scoring
rules. The key technique was a method to design an automated market maker, and there has been
much work on facilitating this using mechanisms based on shares (i.e. Arrow-Debreu securities).
There is a medium-sized literature on this topic by now [6, 5, 1, 2] and we mention only a selection.
The key difference between the present paper and the work on designing prediction markets is that
our techniques are solely focused on profit and risk, and not on other issues like price discovery or
information aggregation. Recent work by Della Penna and Reid [19] considered market making as
a multi-armed bandit problem, and this is a notable exception where profit was the focus.
This "non-stochastic" approach we take to the market making problem echoes many of the ideas
of Cover's results on Universal Portfolio algorithms [20], an area that has received much follow-up
work [16, 15, 14, 3, 7] given its robustness to adversarially-chosen price fluctuations. But these
algorithms are of the "market taking" variety, that is they actively rebalance their portfolio on a
daily basis. Moreover, the goal of the Universal Portfolio is to get low regret with respect to the best
fixed mixture of investments, rather than the best bid-ask spread which is aim of the present work.
2 The Market Execution Framework
We now present our market model formally. We will consider the buying and selling of a single
security, say the stock of Microsoft, over the course of some time interval. We assume that all
events in the market take place at discrete points in time throughout this day. At each time period t a
market price pt is announced to the world. In a typical stock exchange this price will be rounded to a
discrete value; historically stock prices were quoted in 1/8's of a dollar, although now they are quoted
in pennies. We let δ be the discretization parameter of the exchange, and for simplicity assume
δ = 1/m for some positive integer m. Now let Π be the set of discrete prices within some feasible
range, Π := {δ, 2δ, 3δ, . . . , (M − 1)δ, Mδ}, where Mδ is some reasonable bound on the largest price.
A trading strategy maintains two state variables at the beginning of every time period t: (a) the
holdings or inventory Ht ∈ ℝ, representing the amount of stock that the strategy is long or short
at the beginning of time period t (Ht will be negative if the strategy is short); (b) the cash Ct ∈ ℝ
of the strategy, representing the money earned or lost by the investor at that time. Initially we have
C1 = H1 = 0. Note that when Ct < 0 this is not necessarily bad; it simply means the investor has
borrowed money to purchase holdings, often referred to as "trading on margin".
Let us now consider the trading mechanism at time t. For simplicity we assume there are two types
of trades that can be executed, and each will change the cash and holdings at the following time
period. By default, set Ht+1 ← Ht and Ct+1 ← Ct. Then the trading strategy can execute any
subset of the following two actions:
• Market Order: At time t the posted price is pt and the trader executes a trade of X shares,
with X ∈ ℝ. In this case we update the cash as Ct+1 ← Ct+1 − pt X and Ht+1 ← Ht+1 + X. Note that if X < 0 then this is a short sale in which case the trader's cash
increases1
• Limit Order: Before time period t, the trader submits a demand schedule Lt : Π → ℝ+,
where it is assumed that Lt(pt−1) = 0. For every price p ∈ Π with p < pt−1, the value
Lt(p) is the number of shares the trader would like to buy at a price of p. For every p > pt−1
the value Lt(p) is the number of shares the trader would like to sell at a price of p. One
should interpret a limit order in terms of "posting shares to the order book": these shares
are up for sale (and/or purchase) but the order will only be executed if the price moves.
In round t the posted price becomes pt and it is assumed that all shares offered at any price
between pt−1 and pt are transacted. More specifically, we have two cases:
  – If pt > pt−1 then for each p ∈ Π with pt−1 < p ≤ pt we update Ct+1 ← Ct+1 + pLt(p) and Ht+1 ← Ht+1 − Lt(p);
  – Else if pt < pt−1 then for each p ∈ Π with pt ≤ p < pt−1 we update Ct+1 ← Ct+1 − pLt(p) and Ht+1 ← Ht+1 + Lt(p).
It is worth noting market orders are quite different from limit orders. A limit order is a passive action
in the market, the trader simply states that he would be willing to trade a number of shares at a range
of different prices. But if the market does not move then no transactions occur. The market order is a
much more direct action to take, the transaction is guaranteed to execute at the current market price.
The market order has the downside that the trader does not get to specify the price at which he would
like to trade, contrary to the limit order. Roughly speaking, an MM strategy will generally interact
with the market via limit orders, since the MM is simply hoping to profit from liquidity provision.
But the MM may at times have to place market orders to balance inventory to control risk.
We include one more piece of notation, the value of the strategy?s portfolio Vt+1 at the end of time
period t, which can be defined explicitly in terms of the cash, holdings, and current market price:
Vt+1 := Ct+1 + pt Ht+1 . In other words, Vt+1 is the amount of cash the strategy would have if it
liquidated all holdings at the current market price.
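As a concrete restatement of these execution semantics, the following sketch (ours, not the authors' code; the variable names are illustrative) tracks cash, holdings, and portfolio value under the two order types:

```python
# A minimal sketch of the execution rules above. State is (cash C, holdings H);
# a market order of X shares trades immediately at the posted price, while a
# limit order schedule L (a dict mapping price -> shares) only executes at the
# prices the market crosses between p_prev and p_new.

def market_order(C, H, p, X):
    # buy X > 0 shares (or short-sell if X < 0) at price p
    return C - p * X, H + X

def execute_limits(C, H, L, p_prev, p_new):
    if p_new > p_prev:                       # rising price: sell orders fill
        for p, q in L.items():
            if p_prev < p <= p_new:
                C, H = C + p * q, H - q
    elif p_new < p_prev:                     # falling price: buy orders fill
        for p, q in L.items():
            if p_new <= p < p_prev:
                C, H = C - p * q, H + q
    return C, H

def portfolio_value(C, H, p):
    return C + p * H                         # V_{t+1} = C_{t+1} + p_t * H_{t+1}
```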
Assumptions of our model. In the described framework we make several simplifying assumptions
on the trading execution mechanism, which we note here.
(1) The trader pays neither transaction fees nor borrowing costs when his cash balance is negative.
(2) Market orders are executed at exactly the posted market price, without "slippage" of any kind.
This suggests that the market is very liquid relative to the actions of the MM.
(3) The market allows the buying and selling of fractional shares.
1
Technically speaking, a brokerage firm won?t give the short-seller the cash to spend since this money will
be used to back up losses when the short position is closed. But for the purposes of accounting it is perfectly
reasonable to record cash in this way, assuming that the strategy ends up with holdings of 0.
(4) The price sequence is ?exogenously? determined, meaning that the trades we make do not affect
the current and future prices. This assumption has been made in previous results [4] and it is perhaps
quite strong, especially if the MM is providing the bulk of the liquidity. We leave it for future work
to consider the setting with a non-exogenous price process.
(5) Unexecuted limit orders are cancelled before the next period. That is, for any p not lying
between pt−1 and pt it is assumed that the Lt(p) untransacted shares at price p are removed from the
order book. This is just notational convenience: the MM can resubmit these shares via Lt+1.
3 Spread-based Strategies
In this section we present a class of simple market making strategies which we refer to as spread-based strategies since they maintain a fixed bid-ask spread throughout. We then prove some structural properties on this class of strategies. We only give proof sketches for lack of space; all proofs
can be found in an appendix in the supplementary material.
3.1 Spread-based strategies.
We consider market making strategies parameterized by a window size b ∈ {δ, 2δ, . . . , B}, where
B is a multiple of δ. Before round t, the strategy S(b) selects a window of size b, viz. [at, at + b],
starting with a1 = p1. For some fixed liquidity density parameter λ, it submits a buy order of λ
shares at every price p ∈ Π such that p < at and a sell order of λ shares at every price p ∈ Π such that
p > at + b. Depending on the price in the trading period pt, the strategy adjusts the next window by
the smallest amount necessary to include pt.
Algorithm 1 Spread-Based Strategy S(b)
1: Receive parameters b > 0, liquidity density λ > 0, initial price p1 as input. Initialize a1 := p1.
2: for t = 1, 2, . . . , T do
3:   Observe market price pt
4:   If pt < at then at+1 ← pt
5:   Else If pt > at + b then at+1 ← pt − b
6:   Else at+1 ← at
7:   Submit limit order Lt+1: Lt+1(p) = 0 if p ∈ [at+1, at+1 + b], else Lt+1(p) = λ.
8: end for
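A direct simulation of S(b) is straightforward. The sketch below is our own illustration (not the authors' code), using the normalization δ = 1 tick and λ = 1 share per price level that the text adopts later:

```python
# A sketch of the spread-based strategy S(b), with delta = 1 (prices are
# integers, e.g. cents) and liquidity density lambda = 1 share per level.

def run_spread_strategy(prices, b):
    a = prices[0]                  # left edge of the window [a, a + b]
    cash, holdings = 0, 0
    for p in prices[1:]:
        if p < a:                  # buy orders at levels p, ..., a - 1 fill
            for level in range(p, a):
                cash -= level
                holdings += 1
            a = p                  # shift window down to include p
        elif p > a + b:            # sell orders at levels a+b+1, ..., p fill
            for level in range(a + b + 1, p + 1):
                cash += level
                holdings -= 1
            a = p - b              # shift window up to include p
    value = cash + holdings * prices[-1]   # V_{T+1}
    return cash, holdings, value
```

On any simulated path, the invariant of Lemma 3 below (holdings equal a1 − at) can be checked directly.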
The intuition behind a spread-based strategy is that the MM waits for the price to deviate in such a
way that it leaves the window [at, at + b]. Let's say the price suddenly drops below at and we get
pt = at − kδ for some positive integer k such that kδ < b. As soon as this happens some transactions
occur and the MM now has holdings of kλ shares. That is, the MM will have purchased λ shares at
each of the prices at − δ, at − 2δ, . . . , at − kδ. On the following round the MM updates his limit
order Lt+1 to offer to sell λ shares at each of the price levels at + b − (k − 1)δ, at + b − (k − 2)δ, . . . .
This gives a natural matching between shares that were bought and shares that are offered for sale,
with the sale price being exactly b higher than the purchased price. If, at a later time t′ > t, the price
rises so that pt′ ≥ at + b + δ then all shares bought previously are sold at a profit of kbλ.
We now give a very useful lemma, which essentially shows that the profit and loss
of a spread-based strategy can be calculated from two factors: (a) how much the spread window moves throughout the
trading period, and (b) how far away the final price is from the initial price. A sketch of the proof is
provided, but the complete version is in the Appendix.
Lemma 1 The value of the portfolio of S(b) at time T can be bounded as
$$V_{T+1} \;\geq\; \frac{\lambda}{\delta}\left(\frac{b}{2}\sum_{t=1}^{T}|a_{t+1}-a_t| \;-\; \frac{(|a_{T+1}-a_1|+b)^2}{2}\right)$$
P ROOF :[Sketch] The proof of this lemma is quite similar to the proof of Theorem 2.1 in [4]. The
main idea is given in the intuitive explanation above: we can match pairs of shares that are bought
and sold at prices that are b apart, thus registering a profit of b for each such pair. We can relate
these matched pairs to the at's, and the unmatched stock transactions to the difference |a_{T+1} − a_1|,
yielding the stated bound. □
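The bound lends itself to a quick numerical check. The sketch below is ours; it reuses the run_spread_strategy sketch given after Algorithm 1 and assumes the λ = δ = 1 normalization:

```python
import random

# Numerical check of Lemma 1 on a +/-1 random-walk price path.
random.seed(0)
prices = [100]
for _ in range(1000):
    prices.append(prices[-1] + random.choice([-1, 1]))

b = 5
cash, holdings, value = run_spread_strategy(prices, b)

# Recover the window path a_t to evaluate the right-hand side of the bound.
a_path = [prices[0]]
for p in prices[1:]:
    a = a_path[-1]
    a_path.append(p if p < a else (p - b if p > a + b else a))

window_movement = sum(abs(a2 - a1) for a1, a2 in zip(a_path, a_path[1:]))
bound = b / 2 * window_movement - (abs(a_path[-1] - a_path[0]) + b) ** 2 / 2
assert value >= bound    # the Lemma 1 inequality holds on this sample path
```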
In other words, the risk taken by all strategies is roughly the same ($\frac{1}{2}|p_{T+1} - p_1|^2$ up to an additive constant in the quadratic term). But the revenue of the spread-based strategy scales with two
quantities: the size of the window b but also the total movement of the window. This raises an interesting tradeoff in setting the b parameter, since we would like to make as much as possible on the
movement of the window, but by increasing b the window will get "pushed around" a lot less by the
fluctuating price.
We now make a convenient normalization. Since for every unit price change, the strategies trade
λ/δ shares, in the rest of the paper, without loss of generality, we may assume that λ = 1 and δ = 1
(by appropriately changing the unit of currency). The regret bounds for general λ and δ scale up by
a factor of λ/δ.
3.2 Structural properties of spread-based strategies.
It is useful to prove certain properties about the proposed spread-based strategies.
Lemma 2 Consider any two strategies S(b) and S(b′) with b′ < b. Let [a′t, a′t + b′] and [at, at + b]
denote the intervals chosen by S(b′) and S(b) at time t respectively. Then for all t, we have [a′t, a′t + b′] ⊆ [at, at + b].
PROOF: [Sketch] This is easy to prove by induction on t, via a simple case analysis on where pt lies
in relation to the windows [a′t, a′t + b′] and [at, at + b]. □
Lemma 3 For any strategy S(b), its inventory at time t, Ht, equals a1 − at.
PROOF: [Sketch] Again using case analysis on where pt lies in relation to the window [at, at + b],
we can show that Ht + at is an invariant. Thus, Ht + at = H1 + a1 = a1, and hence Ht = a1 − at.
□
The following corollary follows easily:
Corollary 1 For any round t, consider any two strategies S(b) and S(b′) with b′ < b, with inventories Ht and H′t respectively. Then |Ht − H′t| ≤ b − b′.
PROOF: By Lemma 3 we have |Ht − H′t| = |a1 − a′1 + a′t − at| ≤ b − b′, since [a′1, a′1 + b′] ⊆ [a1, a1 + b]
and by Lemma 2 [a′t, a′t + b′] ⊆ [at, at + b]. □
Definition 1 (γ-bounded volatility) A price sequence p1, p2, . . . , pT is said to have γ-bounded
volatility if for all t ≥ 2, we have |pt − pt−1| ≤ γ.
We assume from now on that the price sequence has γ-bounded volatility. Suppose now that we have
a set B of N window sizes b, all bounded by B. In the rest of the paper, all vectors are in ℝ^N with
coordinates indexed by b ∈ B. For every b ∈ B, at the end of time period t, let its inventory be
Ht+1(b), cash value be Ct+1(b), and total value be Vt+1(b). These quantities define the vectors
Ht+1, Ct+1 and Vt+1. The following lemma shows that the change in the total value of different
strategies in any round is similar.
Lemma 4 Define G = 2γB + 2γ². In round t, let H = min_{b∈B} Ht(b). Then for any strategy S(b),
we have
$$|(V_{t+1}(b) - V_t(b)) - H(p_t - p_{t-1})| \leq G.$$
Thus, for any two window sizes b and b′, we have
$$|(V_{t+1}(b) - V_t(b)) - (V_{t+1}(b') - V_t(b'))| \leq 2G.$$
PROOF: [Sketch] Since |pt − pt−1| ≤ γ, each strategy trades at most γ shares, at prices between
pt−1 and pt. Next, by Corollary 1, for any strategy |Ht(b) − H| ≤ B. Using these bounds, and the
definitions of the total value, some calculations give the stated bounds. □
4 A low regret meta-algorithm
Recall that we have a set B of N window sizes b, all bounded by B. We want to design a low-regret
algorithm that achieves almost as much payoff as that of the best strategy S(b) for b ∈ B.
Consider the following meta-algorithm. Treat every strategy S(b) as an expert and run a regret-minimizing algorithm for learning with expert advice (such as Multiplicative Weights [18] or Follow-The-Perturbed-Leader [17]). The distributions generated by the regret-minimizing algorithm are
treated as mixing weights for the different strategies, essentially executing each strategy scaled by
its current weight. In each round, the meta-algorithm restores the inventory of each strategy to the
correct state by additionally buying or selling enough shares so that its inventory is exactly what
it would have been had it run the different strategies with their present weights throughout. The
specific algorithm is given below.
Algorithm 2 Low regret meta-algorithm
1: Run every strategy S(b) in parallel so that at the end of each time period t, all trades made by
   the strategies and the vectors Ht+1, Ct+1 and Vt+1 ∈ ℝ^N can be computed.
2: Start a regret-minimizing algorithm A for learning from expert advice with one expert corresponding to each strategy S(b) for b ∈ B. Let the distribution over strategies generated by A at
   time t be wt.
3: for t = 1, 2, . . . , T do
4:   Execute any market orders from the previous period at the current market price pt so that the
     inventory now equals Ht · wt. The cash value changes by −(Ht · (wt − wt−1))pt.
5:   Execute any limit orders from the previous period: a wt-weighted combination of the limit
     orders of the strategies S(b). The holdings change to Ht+1 · wt, and the cash value changes
     by (Ct+1 − Ct) · wt.
6:   For each strategy S(b) for b ∈ B, set its payoff in round t to be Vt+1(b) − Vt(b) and send
     these payoffs to A.
7:   Obtain the updated distribution wt+1 from A.
8:   Place a market order to buy Ht+1 · (wt+1 − wt) shares in the next period, and a wt+1-weighted
     combination of the limit orders of the strategies S(b).
9: end for
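A compact sketch of this loop is given below (ours, not the authors' implementation, and with schematic 1-indexing). It assumes the per-strategy state arrays have been precomputed by running every S(b) in parallel, as in step 1:

```python
import numpy as np

# Sketch of Algorithm 2. H[t], C[t], V[t] are the length-N inventory, cash,
# and value vectors of the strategies at the start of period t (row 0 unused);
# prices[t] = p_t, so prices has length T + 1 with a dummy at index 0.
# update_weights(w, payoff, t) is any regret-minimizing update, e.g. the MW
# or FPL updates sketched later.

def run_meta(prices, H, C, V, update_weights):
    T, N = len(prices) - 1, H.shape[1]
    w = np.full(N, 1.0 / N)          # w_1: uniform over strategies
    w_prev = w.copy()
    cash = 0.0
    for t in range(1, T + 1):
        p = prices[t]
        # step 4: market order rebalances inventory from H[t].w_prev to H[t].w
        cash -= p * (H[t] @ (w - w_prev))
        # step 5: the w-weighted limit orders of the strategies execute
        cash += (C[t + 1] - C[t]) @ w
        # steps 6-7: payoffs V[t+1] - V[t] go to the learner, which updates w
        w_prev, w = w, update_weights(w, V[t + 1] - V[t], t)
    return cash + (H[T + 1] @ w_prev) * prices[T]   # final portfolio value
```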
We now prove the following bound on the regret of the algorithm based on the regret of the underlying algorithm A. Recall from Lemma 4 the definition of G := 2γB + 2γ².
Theorem 2 Assume that the price sequence has γ-bounded volatility. The regret of the meta-algorithm is bounded by
$$\mathrm{Regret}(A) + \frac{G}{2}\sum_{t=1}^{T}\|w_t - w_{t+1}\|_1.$$
PROOF: The regret bound for A implies that $\sum_{t=1}^{T}(V_{t+1} - V_t)\cdot w_t \geq \max_{b\in B} V_{T+1}(b) - \mathrm{Regret}(A)$.
Lemma 5 shows that the final total value of the meta-algorithm is at least $\sum_{t=1}^{T}(V_{t+1} - V_t)\cdot w_t - \frac{G}{2}\sum_{t=1}^{T}\|w_t - w_{t+1}\|_1$. Thus, the regret of the algorithm is bounded as stated. □
Lemma 5 In round t, the change in total value of the meta-algorithm equals
$$(V_{t+1} - V_t)\cdot w_t + H_t\cdot(w_t - w_{t-1})(p_{t-1} - p_t).$$
Furthermore, $|H_t\cdot(w_t - w_{t-1})(p_{t-1} - p_t)| \leq \frac{G}{2}\|w_t - w_{t-1}\|_1$.
PROOF: [Sketch] The expression for the change in the total value of the meta-algorithm is a simple
calculation using the definitions. The second bound is obtained by noting that all the Ht(b)'s are
within B of each other by Corollary 1, and thus $|H_t\cdot(w_t - w_{t-1})| \leq B\|w_t - w_{t-1}\|_1$, and
$|p_{t-1} - p_t| \leq \gamma$ by the bounded volatility assumption. □
4.1 A low regret algorithm based on Multiplicative Weights
Now we give a low regret algorithm based on the classic Multiplicative Weights (MW) algorithm [18]. Call this algorithm MMMW (Market Making using Multiplicative Weights).
The algorithm takes parameters ηt, for t = 1, 2, . . . , T. It starts by initializing weights w1(b) = 1/N
for every b ∈ B. In round t, the algorithm updates the weights using the rule
$$w_{t+1}(b) := w_t(b)\exp\big(\eta_t(V_{t+1}(b) - V_t(b))\big)/Z_t,$$
for every b ∈ B, where Zt is the normalization constant to make wt+1 a distribution.
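In code, one round of this update is only a few lines; a sketch (ours) with the Theorem 3 learning-rate schedule follows:

```python
import numpy as np

# Sketch of the MMMW update with eta_t = min(sqrt(log N / t), 1) / (2G).
# payoff[b] = V_{t+1}(b) - V_t(b) is the round-t payoff of strategy S(b).
# (To plug into the run_meta sketch above, fix G via functools.partial.)

def mmmw_update(w, payoff, t, G):
    N = len(w)
    eta = min(np.sqrt(np.log(N) / t), 1.0) / (2.0 * G)
    w = w * np.exp(eta * payoff)
    return w / w.sum()          # divide by Z_t so w_{t+1} is a distribution
```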
Using Theorem 2, we can give the following bound on the regret of MMMW:
Theorem 3 Suppose we set $\eta_t = \frac{1}{2G}\min\Big\{\sqrt{\tfrac{\log(N)}{t}},\, 1\Big\}$ for t = 1, 2, . . . , T. Then MMMW has
regret bounded by $13G\sqrt{\log(N)T}$.
PROOF: [Sketch] By Theorem 2, we need to bound ‖wt+1 − wt‖1. The multiplicative update rule
$w_{t+1}(b) = w_t(b)\exp(\eta_t(V_{t+1}(b) - V_t(b)))/Z_t$, and the fact that by Lemma 4 the range of the
entries of Vt+1 − Vt is bounded by 2G, imply that ‖wt+1 − wt‖1 ≤ 4ηtG. Standard analysis for
the regret of the MW algorithm then gives the stated regret bound for MMMW. □
4.2 A low regret algorithm based on Follow-The-Perturbed-Leader
Now we give a low regret algorithm based on the Follow-The-Perturbed-Leader (FPL) algorithm [17]. Call this algorithm MMFPL (Market Making using Follow-The-Perturbed-Leader). We
actually use a deterministic version of the algorithm which has the same regret bound.
The algorithm requires a parameter ε. For every b ∈ B, let p(b) be a sample from the exponential
distribution with mean 1/ε. The distribution wt is then set to be the distribution of the "perturbed
leader", i.e.
$$w_t(b) = \Pr[V_t(b) + p(b) \geq V_t(b') + p(b')\ \ \forall\, b' \in B].$$
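Since these leader probabilities have no simple closed form, they can be estimated by sampling, as is done in the experiments of Section 5; a sketch (ours):

```python
import numpy as np

# Sketch of the MMFPL weights: w_t(b) is the probability that b is the
# perturbed leader. Here it is approximated by Monte Carlo with exponential
# perturbations of mean 1/eps (the experiments average 100 draws similarly).

def mmfpl_weights(V_t, eps, n_samples=100, rng=None):
    rng = rng or np.random.default_rng(0)
    N = len(V_t)
    counts = np.zeros(N)
    for _ in range(n_samples):
        perturbed = V_t + rng.exponential(scale=1.0 / eps, size=N)
        counts[np.argmax(perturbed)] += 1
    return counts / n_samples
```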
Using Theorem 2, we can give the following bound on the regret of MMFPL:
Theorem 4 Choose $\epsilon = \frac{1}{2G}\sqrt{\tfrac{\log(N)}{T}}$. Then the regret of MMFPL is bounded by $7G\sqrt{\log(N)T}$.
PROOF: [Sketch] Again we need to bound ‖wt+1 − wt‖1. Kalai and Vempala [17] show that in the
randomized FPL algorithm, the probability that the leader changes from round t to t + 1 is bounded by
2εG. This implies that ‖wt+1 − wt‖1 ≤ 4εG. Standard analysis for the regret of the FPL algorithm
then gives the stated regret bound for MMFPL. □
5 Experiments
We conducted experiments with stock price data obtained from http://www.netfonds.no/.
We downloaded data for the following stocks: MSFT, HPQ and WMT. The data consists of trades
made throughout a given date in chronological order. We obtained data for these stocks for each of
the 5 days in the range May 6-10, 2013. The number of trades ranged from roughly 7,000 to 38,000.
The quoted prices are rounded to the nearest cent. Our spread-based strategies operate at the level of
a cent: i.e. the windows are specified in terms of cents, and the buy/sell orders are set to 1 share per
cent outside the window. The class of spread-based strategies we used in our experiments correspond
to the following set of window sizes, quoted in cents: B = {1, 2, 3, 4, 5, 10, 20, 40, 80, 100}, so that
N = 10 and B = 100.
We implemented MMMW, MMFPL, simple Follow-The-Leader2 (FTL), and simple uniform averaging over all strategies. We compared their performance to the best strategy in hindsight. For
MMFPL, wt was approximated by averaging 100 independently drawn initial perturbations.
2
This algorithm simply chooses the best strategy in each round based on past performance without perturbations.
Symbol  Date        T      Best     MMMW     MMFPL     FTL       Uniform
HPQ     05/06/2013  7128   668.00   370.07   433.99    638.00    301.10
HPQ     05/07/2013  13194  558.00   620.18   -41.54    19.00     100.80
HPQ     05/08/2013  12016  186.00   340.11   -568.04   -242.00   -719.80
HPQ     05/09/2013  14804  1058.00  890.99   327.05    214.00    591.40
HPQ     05/10/2013  14005  512.00   638.53   -446.42   -554.00   345.60
MSFT    05/06/2013  29481  1072.00  1062.65  -1547.01  -1300.00  542.60
MSFT    05/07/2013  34017  1260.00  1157.38  1048.46   1247.00   63.80
MSFT    05/08/2013  38664  2074.00  2064.83  1669.30   2074.00   939.10
MSFT    05/09/2013  34386  1813.00  1802.91  1534.68   1811.00   656.10
MSFT    05/10/2013  27641  1236.00  1250.27  556.08    590.00    750.90
WMT     05/06/2013  8887   929.00   694.48   760.70    785.00    235.20
WMT     05/07/2013  11309  1333.00  579.88   995.43    918.00    535.40
WMT     05/08/2013  12966  1372.00  1300.47  832.80    974.00    926.40
WMT     05/09/2013  10431  2415.00  2329.78  1882.90   1991.00   1654.10
WMT     05/10/2013  9567   1150.00  1001.31  7.03      209.00    707.70
Table 1: Final performance of various algorithms in cents. Bolded values indicate best performance.
Italicized values indicate runs where the MMMW algorithm beat the best in hindsight.
Figure 1: Performance of various algorithms and strategies for HPQ on May 8 and 9, 2013. For
clarity, the total value every 100 periods is shown. Top row: On May 8, MMMW outperforms the
best strategy, and on May 9 the reverse happens. Bottom row: performance of different strategies.
On May 8, b = 100 performs best, while on May 9, b = 40 performs best.
Experimentally, having slightly larger learning rates seemed to help. For MMMW, we used the
specification $\eta_t = \min\Big\{\sqrt{\tfrac{\log(N)}{t}},\, \tfrac{1}{G_t}\Big\}$, where $G_t = \max_{\tau \leq t,\, b, b' \in B} |V_\tau(b) - V_\tau(b')|$, and for
MMFPL, we used the specification $\epsilon = \sqrt{\tfrac{\log(N)}{T}}$. These specifications ensure that the theory goes
through and the regret is bounded by $O(\sqrt{T})$ as before.
Table 1 shows the performance of the algorithms in the 15 runs (3 stocks times 5 days). In all the
runs, the MMMW algorithm performed nearly as well as the best strategy, at times even outperforming it. MMFPL didn't perform as well, however. As an illustration of how closely MMMW tracks
the best performance achievable using the spread-based strategies in the class, in Figure 1 we show
HPQ. We also show the performance of different strategies on these two days - it can be seen that the
best strategy differs, thus motivating the need for an adaptive learning algorithm.
References
[1] J. Abernethy, Y. Chen, and J. Wortman Vaughan. An optimization-based framework for automated market-making. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 297-306, 2011.
[2] S. Agrawal, E. Delage, M. Peters, Z. Wang, and Y. Ye. A unified framework for dynamic prediction market design. Operations Research, 59(3):550-568, 2011.
[3] A. Blum and A. Kalai. Universal portfolios with and without transaction costs. Machine Learning, 35(3):193-205, 1999.
[4] T. Chakraborty and M. Kearns. Market making and mean reversion. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 307-314. ACM, 2011.
[5] Y. Chen and D. M. Pennock. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 49-56, 2007.
[6] Y. Chen and J. Wortman Vaughan. A new understanding of prediction markets via no-regret learning. In Proceedings of the 11th ACM Conference on Electronic Commerce, pages 189-198, 2010.
[7] T. M. Cover and E. Ordentlich. Universal portfolios with side information. IEEE Transactions on Information Theory, 42(2):348-363, 1996.
[8] S. Das. A learning market-maker in the Glosten-Milgrom model. Quantitative Finance, 5(2):169-180, 2005.
[9] S. Das. The effects of market-making on price dynamics. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, pages 887-894, 2008.
[10] S. Das and M. Magdon-Ismail. Adapting to a market shock: Optimal sequential market-making. In Proceedings of the 21st Annual Conference on Neural Information Processing Systems, pages 361-368, 2008.
[11] L. R. Glosten and P. R. Milgrom. Bid, ask and transaction prices in a specialist market with heterogeneously informed traders. Journal of Financial Economics, 14(1):71-100, 1985.
[12] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):105-119, 2003.
[13] R. Hanson. Logarithmic market scoring rules for modular combinatorial information aggregation. Journal of Prediction Markets, 1(1):3-15, 2007.
[14] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In Learning Theory, pages 499-513. Springer, 2006.
[15] D. P. Helmbold, R. E. Schapire, Y. Singer, and M. K. Warmuth. On-line portfolio selection using multiplicative updates. Mathematical Finance, 8(4):325-347, 1998.
[16] A. T. Kalai and S. Vempala. Efficient algorithms for universal portfolios. The Journal of Machine Learning Research, 3:423-440, 2003.
[17] A. T. Kalai and S. Vempala. Efficient algorithms for online decision problems. J. Comput. Syst. Sci., 71(3):291-307, 2005.
[18] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212-261, 1994.
[19] N. Della Penna and M. D. Reid. Bandit market makers. arXiv preprint arXiv:1112.0076, 2011.
[20] T. M. Cover. Universal portfolios. Mathematical Finance, 1(1):1-29, January 1991.
| 4910 |@word version:2 middle:1 achievable:1 chakraborty:2 heterogeneously:1 willing:2 seek:1 simulation:1 jacob:1 simplifying:1 accounting:1 profit:14 mention:1 initial:2 offload:1 series:1 liquid:2 pt0:1 offering:1 past:2 outperforms:1 current:7 com:1 discretization:1 yet:1 must:1 additive:1 hoping:1 drop:1 update:7 aside:1 intelligence:1 leaf:1 warmuth:2 beginning:2 short:5 record:2 provides:1 gx:1 mathematical:2 burst:1 registering:1 direct:1 reversion:1 prove:7 consists:1 market:67 roughly:3 p1:5 frequently:1 nor:1 multi:1 behavior:1 buying:3 election:1 armed:1 window:17 considering:1 increasing:1 becomes:1 begin:1 provided:1 underlying:3 bounded:18 moreover:1 notation:1 medium:1 didn:1 what:3 matched:1 kind:1 informed:2 proposing:1 unified:1 hindsight:4 guarantee:6 quantitative:1 commodity:1 every:14 act:2 chronological:1 finance:3 exactly:3 scaled:1 sale:4 control:1 unit:2 reid:2 positive:3 before:4 engineering:1 treat:1 limit:12 aiming:1 troublesome:1 fluctuation:5 solely:1 suggests:1 limited:1 range:4 commerce:3 investment:1 regret:35 lost:1 differs:1 area:1 empirical:1 universal:6 adapting:1 matching:1 convenient:1 word:3 wait:1 suggest:1 submits:2 get:4 convenience:1 selection:2 put:1 risk:7 vaughan:2 www:1 deterministic:1 center:1 send:1 kale:2 exposure:1 exogenously:3 starting:1 independently:1 focused:1 go:1 economics:1 simplicity:2 convex:1 immediately:1 helmbold:1 rule:4 adjusts:1 his:2 financial:1 classic:1 coordinate:1 autonomous:1 profitable:1 updated:1 pt:51 suppose:2 designing:1 approximated:1 slippage:1 bottom:1 preprint:1 enters:1 initializing:1 worst:1 calculate:1 wang:1 ensures:1 earned:1 movement:5 trade:11 removed:1 intuition:1 seller:1 dynamic:2 transact:2 raise:1 technically:1 learner:1 basis:1 selling:3 easily:1 joint:1 stock:14 represented:1 various:2 artificial:1 marketplace:1 horse:1 outside:1 abernethy:2 firm:1 whose:1 quite:5 spend:1 supplementary:1 larger:1 say:3 modular:1 satyen:1 echo:1 final:3 online:8 sequence:6 agrawal:1 propose:1 date:2 mixing:1 achieve:1 ismail:1 intuitive:1 double:1 leave:1 executing:1 volatility:5 depending:1 help:1 measured:1 nearest:1 b0:20 received:1 borrowed:1 p2:1 strong:1 implemented:1 trading:9 implies:3 indicate:2 closely:1 correct:1 stochastic:4 kb:1 material:1 exchange:4 frontier:1 mm:26 lying:1 around:1 considered:2 exp:2 achieves:1 consecutive:1 smallest:1 purpose:1 favorable:1 combinatorial:3 maker:8 largest:1 weighted:3 minimization:1 aim:1 rather:1 kalai:5 cash:12 bet:1 corollary:4 focus:2 viz:1 notational:1 consistently:1 likelihood:1 adversarial:1 dollar:1 a01:3 dependent:1 typically:1 initially:1 borrowing:1 bandit:2 relation:2 selects:1 issue:1 aforementioned:1 among:1 restores:1 initialize:1 equal:3 having:1 sell:10 adversarially:1 look:1 cancel:1 nearly:2 thin:1 future:3 purchase:2 modern:1 simultaneously:1 kwt:7 roof:9 microsoft:1 maintain:1 mixture:1 yielding:1 behind:1 daily:1 necessary:1 indexed:1 littlestone:1 uncertain:1 downside:1 cover:3 cost:5 subset:1 entry:1 trader:20 uniform:2 wortman:2 conducted:1 motivating:1 perturbed:5 chooses:1 adaptively:1 density:2 international:1 randomized:1 participates:1 off:1 pool:1 rounded:2 w1:1 again:2 central:1 choose:1 unmatched:1 worse:2 book:6 expert:8 actively:1 syst:1 potential:1 notable:1 explicitly:1 race:1 piece:1 multiplicative:5 performed:2 h1:2 closed:1 exogenous:1 hazan:1 later:1 lot:1 start:2 aggregation:2 maintains:1 investor:2 parallel:1 simon:1 minimize:1 who:1 bolded:1 correspond:1 worth:1 asset:5 executes:1 penna:2 definition:4 proof:5 couple:1 
popular:1 ask:8 recall:2 knowledge:1 fractional:1 provision:1 schedule:1 actually:1 higher:1 day:5 follow:4 specify:1 execute:4 generality:1 furthermore:1 just:1 sketch:9 receives:1 lack:2 perhaps:2 effect:1 ye:1 ranged:1 hence:1 round:13 won:1 complete:1 performs:3 passive:1 auction:1 meaning:1 superior:1 extend:1 he:2 interpret:1 refer:1 ai:1 inital:1 rd:1 portfolio:10 funded:1 had:1 wmt:6 specification:3 money:3 gt:2 recent:2 followthe:1 inf:1 apart:1 reverse:1 scenario:1 certain:4 meta:8 outperforming:1 watson:1 vt:30 scoring:2 seen:1 period:16 multiple:1 currency:1 debreu:1 match:1 calculation:2 offer:2 long:1 a1:11 controlled:1 prediction:6 essentially:2 arxiv:2 normalization:2 agarwal:1 c1:1 receive:1 fellowship:1 want:1 ftl:2 interval:2 else:4 appropriately:1 rest:2 operate:1 pennock:1 contrary:1 integer:2 bought:3 structural:4 mw:2 noting:2 call:2 revealed:1 easy:1 enough:1 automated:2 bid:8 affect:2 switch:1 variety:1 followup:1 pennsylvania:1 perfectly:1 idea:2 tradeoff:1 t0:1 expression:2 a0t:9 utility:1 peter:1 speaking:2 afford:1 action:4 deep:1 generally:2 useful:3 amount:3 ten:1 http:1 schapire:1 per:1 track:1 bulk:1 discrete:3 key:3 blum:1 drawn:1 changing:1 clarity:1 neither:1 ht:31 shock:1 ht0:3 year:1 run:7 facilitated:1 parameterized:3 master:1 uncertainty:1 place:4 throughout:5 reasonable:2 decide:1 almost:1 electronic:3 decision:2 appendix:2 announced:1 fee:1 pushed:1 bound:15 ct:16 pay:1 guaranteed:2 quadratic:1 annual:1 occur:2 min:2 vempala:3 department:1 combination:2 plt:2 describes:1 slightly:1 making:17 happens:2 invariant:1 pr:1 taken:1 agree:1 previously:1 mechanism:4 reverting:2 know:1 singer:1 milgrom:2 end:7 umich:1 operation:1 magdon:1 observe:1 away:1 fluctuating:1 cancelled:1 specialist:4 robustness:1 cent:6 top:1 include:2 k1:8 build:1 establish:1 especially:1 suddenly:1 purchased:2 move:3 question:1 quantity:4 strategy:75 interacts:1 said:1 exhibit:1 sci:1 majority:1 topic:1 italicized:1 induction:1 assuming:2 illustration:1 providing:1 balance:2 minimizing:3 executed:3 holding:9 relate:1 negative:2 rise:1 stated:5 design:8 proper:1 zt:3 perform:1 sold:2 beat:1 immediate:1 payoff:4 defintion:1 extended:1 january:1 rn:2 jabernet:1 perturbation:2 community:1 pair:3 required:2 specified:2 brokerage:1 hanson:3 security:2 able:1 below:2 max:1 transacted:1 explanation:1 event:3 natural:1 treated:1 msft:6 representing:2 historically:1 started:1 ready:1 fpl:3 deviate:1 prior:1 literature:2 discovery:1 understanding:1 relative:2 loss:7 multiagent:1 bear:1 interesting:1 revenue:1 downloaded:1 agent:2 offered:2 share:28 ibm:2 row:2 course:1 soon:1 side:2 taking:1 penny:1 benefit:2 liquidity:6 default:1 world:3 ordentlich:1 seemed:1 author:1 made:3 adaptive:2 far:1 transaction:8 obtains:1 buy:10 assumed:3 quoted:4 leader:6 postdoctoral:1 continuous:1 robin:1 additionally:1 promising:2 hpq:8 reasonably:1 table:2 interact:2 inventory:12 necessarily:1 posted:3 constructing:1 submit:1 da:3 spread:21 main:2 arrow:1 backup:1 facilitating:1 advice:4 referred:1 position:1 exponential:1 comput:2 lie:2 posting:1 theorem:8 bad:1 specific:1 showing:1 symbol:1 monopolistic:1 evidence:1 sequential:2 ci:1 execution:3 margin:1 demand:1 chen:3 suited:1 michigan:1 lt:12 simply:5 logarithmic:2 desire:1 springer:1 rushed:1 acm:4 goal:2 sized:1 price:74 man:1 feasible:1 change:9 experimentally:1 typical:1 specifically:1 determined:1 wt:36 averaging:2 kearns:2 lemma:13 total:9 experimental:1 exception:1 formally:1 glosten:2 della:2 |
4,322 | 4,911 | Submodular Optimization with Submodular Cover
and Submodular Knapsack Constraints
Rishabh Iyer
Department of Electrical Engineering
University of Washington
[email protected]
Jeff Bilmes
Department of Electrical Engineering
University of Washington
[email protected]
Abstract
We investigate two new optimization problems ? minimizing a submodular
function subject to a submodular lower bound constraint (submodular cover)
and maximizing a submodular function subject to a submodular upper bound
constraint (submodular knapsack). We are motivated by a number of real-world
applications in machine learning including sensor placement and data subset
selection, which require maximizing a certain submodular function (like coverage
or diversity) while simultaneously minimizing another (like cooperative cost).
These problems are often posed as minimizing the difference between submodular
functions [9, 25] which is in the worst case inapproximable. We show, however,
that by phrasing these problems as constrained optimization, which is more natural
for many applications, we achieve a number of bounded approximation guarantees.
We also show that both these problems are closely related and an approximation
algorithm solving one can be used to obtain an approximation guarantee for
the other. We provide hardness results for both problems thus showing that
our approximation factors are tight up to log-factors. Finally, we empirically
demonstrate the performance and good scalability properties of our algorithms.
1 Introduction
A set function f : 2^V → ℝ is said to be submodular [4] if for all subsets S, T ⊆ V, it holds that
f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T). Defining f(j|S) ≜ f(S ∪ j) − f(S) as the gain of j ∈ V
in the context of S ⊆ V, then f is submodular if and only if f(j|S) ≥ f(j|T) for all S ⊆ T and
j ∉ T. The function f is monotone iff f(j|S) ≥ 0, ∀j ∉ S, S ⊆ V. For convenience, we assume
the ground set is V = {1, 2, · · · , n}. While general set function optimization is often intractable,
many forms of submodular function optimization can be solved near optimally or even optimally
in certain cases. Submodularity, moreover, is inherent in a large class of real-world applications,
particularly in machine learning, making it extremely useful in practice.
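To fix the notation, the gain f(j|S) and the diminishing-returns characterization can be checked by brute force on small ground sets; the sketch below (ours) uses a toy coverage function, and is illustrative only since the check is exponential in |V|:

```python
from itertools import combinations

# Gain f(j | S) = f(S + j) - f(S), and a brute-force submodularity check.

def gain(f, j, S):
    return f(S | {j}) - f(S)

def is_submodular(f, V):
    subsets = [set(s) for r in range(len(V) + 1) for s in combinations(V, r)]
    return all(gain(f, j, S) >= gain(f, j, T)
               for S in subsets for T in subsets if S <= T
               for j in V - T)

# Toy coverage function: f(S) = size of the union of the sets indexed by S.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(is_submodular(f, {1, 2, 3}))   # True: coverage is submodular
```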
In this paper, we study a new class of discrete optimization problems that have the following form:
Problem 1 (SCSC): min{f(X) | g(X) ≥ c},    and    Problem 2 (SCSK): max{g(X) | f(X) ≤ b},
where f and g are monotone non-decreasing submodular functions that also, w.l.o.g., are normalized
(f(∅) = g(∅) = 0)¹, and where b and c refer to budget and cover parameters respectively. The
corresponding constraints are called the submodular cover [29] and submodular knapsack [1]
respectively and hence we refer to Problem 1 as Submodular Cost Submodular Cover (henceforth
SCSC) and Problem 2 as Submodular Cost Submodular Knapsack (henceforth SCSK). Our motivation
stems from an interesting class of problems that require minimizing a certain submodular function
f while simultaneously maximizing another submodular function g. We shall see that these naturally
1
A monotone non-decreasing normalized (f(∅) = 0) submodular function is called a polymatroid function.
occur in applications like sensor placement, data subset selection, and many other machine learning
applications. A standard approach used in literature [9, 25, 15] has been to transform these problems
into minimizing the difference between submodular functions (also called DS optimization):
Problem 0: $\min_{X \subseteq V} f(X) - g(X)$.    (1)
While a number of heuristics are available for solving Problem 0, in the worst-case it is NP-hard
and inapproximable [9], even when f and g are monotone. Although an exact branch and bound
algorithm has been provided for this problem [15], its complexity can be exponential in the worst case.
On the other hand, in many applications, one of the submodular functions naturally serves as part of a
constraint. For example, we might have a budget on a cooperative cost, in which case Problems 1 and
2 become applicable. The utility of Problems 1 and 2 becomes apparent when we consider how they
occur in real-world applications and how they subsume a number of important optimization problems.
Sensor Placement and Feature Selection: Often, the problem of choosing sensor locations can
be modeled [19, 9] by maximizing the mutual information between the chosen variables A and the
unchosen set V\A (i.e., f(A) = I(X_A; X_{V\A})). Alternatively, we may wish to maximize the mutual
information between a set of chosen sensors X_A and a quantity of interest C (i.e., f(A) = I(X_A; C))
assuming that the set of features XA are conditionally independent given C [19, 9]. Both these
functions are submodular. Since there are costs involved, we want to simultaneously minimize the
cost g(A). Often this cost is submodular [19, 9]. For example, there is typically a discount when
purchasing sensors in bulk (economies of scale). This then becomes a form of either Problem 1 or 2.
Data subset selection: A data subset selection problem in speech and NLP involves finding a limited
vocabulary which simultaneously has a large coverage. This is particularly useful, for example in
speech recognition and machine translation, where the complexity of the algorithm is determined
by the vocabulary size. The motivation for this problem is to find the subset of training examples
which will facilitate evaluation of prototype systems [23]. Often the objective functions encouraging
small vocabulary subsets and large acoustic spans are submodular [23, 20] and hence this problem
can naturally be cast as an instance of Problems 1 and 2.
Privacy Preserving Communication: Given a set of random variables X1, · · · , Xn, denote I as
an information source, and P as private information that should be filtered out. Then one way
of formulating the problem of choosing an information-containing but privacy-preserving set of
random variables can be posed as instances of Problems 1 and 2, with f(A) = H(X_A | I) and
g(A) = H(X_A | P), where H(·|·) is the conditional entropy.
Machine Translation: Another application in machine translation is to choose a subset of training
data that is optimized for a given test data set, a problem previously addressed with modular functions
[24]. Defining a submodular function with ground set over the union of training and test sample
inputs V = Vtr ? Vte , we can set f : 2Vtr ? R+ to f (X) = f (X|Vte ), and take g(X) = |X|, and
b ? 0 in Problem 2 to address this problem. We call this the Submodular Span problem.
Apart from the real-world applications above, both Problems 1 and 2 generalize a number of well-studied discrete optimization problems. For example the Submodular Set Cover problem (henceforth
SSC) [29] occurs as a special case of Problem 1, with f being modular and g is submodular. Similarly
the Submodular Cost Knapsack problem (henceforth SK) [28] is a special case of problem 2 again
when f is modular and g submodular. Both these problems subsume the Set Cover and Max k-Cover
problems [3]. When both f and g are modular, Problems 1 and 2 are called knapsack problems [16].
The following are some of our contributions. We show that Problems 1 and 2 are intimately
connected, in that any approximation algorithm for either problem can be used to provide guarantees
for the other problem as well. We then provide a framework of combinatorial algorithms based
on optimizing, sometimes iteratively, subproblems that are easy to solve. These subproblems
are obtained by computing either upper or lower bound approximations of the cost functions or
constraining functions. We also show that many combinatorial algorithms like the greedy algorithm
for SK [28] and SSC [29] also belong to this framework and provide the first constant-factor
bi-criterion approximation algorithm for SSC [29] and hence the general set cover problem [3]. We
then show how with suitable choices of approximate functions, we can obtain a number of bounded
approximation guarantees and show the hardness for Problems 1 and 2, which in fact match some
of our approximation guarantees. Our guarantees and hardness results depend on the curvature of
the submodular functions [2]. We observe a strong asymmetry in the results: the factors change polynomially with the curvature of f but only by a constant factor with the curvature of g, hence making SK and SSC much easier compared to SCSK and SCSC.
2 Background and Main Ideas
We first introduce several key concepts used throughout the paper. This paper includes only the
main results and we defer all the proofs and additional discussions to the extended version [11].
Given a submodular function f, we define the total curvature κ_f as²: κ_f = 1 − min_{j∈V} f(j | V∖j) / f(j) [2]. Intuitively, the curvature 0 ≤ κ_f ≤ 1 measures the distance of f from modularity, and κ_f = 0 if and only if f is modular (or additive, i.e., f(X) = Σ_{j∈X} f(j)). A number of approximation guarantees in the context of submodular optimization have been refined via the curvature of the submodular function [2, 13, 12]. In this paper, we shall witness the role of curvature also in determining the approximations and the hardness of Problems 1 and 2.
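As a concrete illustration of this definition, here is a minimal Python sketch (ours, not from the paper) that evaluates κ_f for a black-box set-function oracle; the function name and oracle signature are our own assumptions.

def total_curvature(f, V):
    # f: callable taking a frozenset to a real; assumed monotone submodular,
    # normalized (f(emptyset) = 0) with f({j}) > 0 for all j (see footnote 2).
    V = frozenset(V)
    fV = f(V)
    ratios = []
    for j in V:
        gain_at_top = fV - f(V - {j})            # f(j | V \ j)
        ratios.append(gain_at_top / f(frozenset({j})))
    return 1.0 - min(ratios)

# A modular function has curvature 0, matching the discussion above.
f_modular = lambda S: float(len(S))
print(total_curvature(f_modular, range(5)))      # -> 0.0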
The main idea of this paper is a framework of algorithms based on choosing appropriate surrogate functions for f and g to optimize over. This framework is represented in Algorithm 1. We would like to choose surrogate functions f̂_t and ĝ_t such that, using them, Problems 1 and 2 become easier. If the algorithm is just single stage (not iterative), we represent the surrogates as f̂ and ĝ. The surrogate functions we consider in this paper are in the forms of bounds (upper or lower) and approximations.

Algorithm 1: General algorithmic framework to address both Problems 1 and 2
1: for t = 1, 2, · · · , T do
2:   Choose surrogate functions f̂_t and ĝ_t for f and g respectively, tight at X^{t−1}.
3:   Obtain X^t as the optimizer of Problem 1 or 2 with f̂_t and ĝ_t instead of f and g.
4: end for
Modular lower bounds: Akin to convex functions, submodular functions have tight modular lower bounds. These bounds are related to the subdifferential ∂f(Y) of the submodular set function f at a set Y ⊆ V [4]. Denote a subgradient at Y by h_Y ∈ ∂f(Y). The extreme points of ∂f(Y) may be computed via a greedy algorithm: Let σ be a permutation of V that assigns the elements in Y to the first |Y| positions (σ(i) ∈ Y if and only if i ≤ |Y|). Each such permutation defines a chain with elements S_0^σ = ∅, S_i^σ = {σ(1), σ(2), . . . , σ(i)}, and S_{|Y|}^σ = Y. This chain defines an extreme point h_Y^σ of ∂f(Y) with entries h_Y^σ(σ(i)) = f(S_i^σ) − f(S_{i−1}^σ). Defined as above, h_Y^σ forms a lower bound of f, tight at Y, i.e., h_Y^σ(X) = Σ_{j∈X} h_Y^σ(j) ≤ f(X), ∀X ⊆ V, and h_Y^σ(Y) = f(Y).
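The chain construction translates directly into code. The following sketch (our own; the paper gives no implementation) computes h_Y^σ for the permutation that lists the elements of Y first, which makes the resulting modular function tight at Y.

def modular_lower_bound(f, V, Y):
    # Returns h as a dict j -> h(j); then sum_{j in X} h[j] <= f(X) for all X,
    # with equality at X = Y.
    Y = set(Y)
    order = [j for j in V if j in Y] + [j for j in V if j not in Y]
    h, prefix, prev = {}, set(), f(frozenset())
    for j in order:
        prefix.add(j)
        cur = f(frozenset(prefix))
        h[j] = cur - prev                        # gain f(S_i) - f(S_{i-1})
        prev = cur
    return h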
Modular upper bounds: We can also define superdifferentials ∂^f(Y) of a submodular function [14, 10] at Y. It is possible, moreover, to provide specific supergradients [10, 13] that define the following two modular upper bounds (when referring to either one, we use m^f_X):

m^f_{X,1}(Y) ≜ f(X) − Σ_{j∈X∖Y} f(j | X∖j) + Σ_{j∈Y∖X} f(j | ∅),
m^f_{X,2}(Y) ≜ f(X) − Σ_{j∈X∖Y} f(j | V∖j) + Σ_{j∈Y∖X} f(j | X).

Then m^f_{X,1}(Y) ≥ f(Y) and m^f_{X,2}(Y) ≥ f(Y), ∀Y ⊆ V, and m^f_{X,1}(X) = m^f_{X,2}(X) = f(X).
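For concreteness, here is a small sketch (ours) of the first bound; recall f(j | S) = f(S ∪ j) − f(S).

def modular_upper_bound_1(f, X, Y):
    # m^f_{X,1}(Y): an upper bound on f(Y), tight at Y = X; assumes f is
    # normalized so that f(j | emptyset) = f({j}).
    X, Y = frozenset(X), frozenset(Y)
    val = f(X)
    val -= sum(f(X) - f(X - {j}) for j in X - Y)     # f(j | X \ j)
    val += sum(f(frozenset({j})) for j in Y - X)     # f(j | emptyset)
    return val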
MM algorithms using upper/lower bounds: Using the modular upper and lower bounds above in Algorithm 1 provides a class of Majorization-Minimization (MM) algorithms, akin to the algorithms proposed in [13] for submodular optimization and in [25, 9] for DS optimization (Problem 0 above). An appropriate choice of the bounds ensures that the algorithm always improves the objective values for Problems 1 and 2. In particular, choosing f̂_t as a modular upper bound of f tight at X^t, or ĝ_t as a modular lower bound of g tight at X^t, or both, ensures that the objective value of Problems 1 and 2 always improves at every iteration as long as the corresponding surrogate problem can be solved
exactly. Unfortunately, Problems 1 and 2 are NP-hard even if f or g (or both) are modular [3], and
therefore the surrogate problems themselves cannot be solved exactly. Fortunately, the surrogate
problems are often much easier than the original ones and can admit log or constant-factor guarantees.
In practice, moreover, these factors are almost 1. Furthermore, with a simple modification of the
iterative procedure of Algorithm 1, we can guarantee improvement at every iteration [11]. What
is also fortunate and perhaps surprising, as we show in this paper below, is that unlike the case of
DS optimization (where the problem is inapproximable in general [9]), the constrained forms of
optimization (Problems 1 and 2) do have approximation guarantees.
² We can assume, w.l.o.g., that f(j) > 0, g(j) > 0, ∀j ∈ V.
Ellipsoidal Approximation: We also consider ellipsoidal approximations (EA) of f. The main result of Goemans et al. [6] is to provide an algorithm based on approximating the submodular polyhedron by an ellipsoid. They show that for any polymatroid function f, one can compute an approximation of the form √(w_f(X)) for a certain modular weight vector w_f ∈ ℝ^V, such that √(w_f(X)) ≤ f(X) ≤ O(√n log n) √(w_f(X)), ∀X ⊆ V. A simple trick then provides a curvature-dependent approximation [12]; we define the κ_f-curve-normalized version of f as follows: f^κ(X) ≜ (f(X) − (1 − κ_f) Σ_{j∈X} f(j)) / κ_f. Then, the submodular function f^ea(X) = κ_f √(w_{f^κ}(X)) + (1 − κ_f) Σ_{j∈X} f(j) satisfies [12]:

f^ea(X) ≤ f(X) ≤ O( (√n log n) / (1 + (√n log n − 1)(1 − κ_f)) ) f^ea(X),  ∀X ⊆ V.    (2)

f^ea is multiplicatively bounded by f by a factor depending on n and the curvature. We shall use
the result above in providing approximation bounds for Problems 1 and 2. In particular, the surrogate
functions f? or g? in Algorithm 1 can be the ellipsoidal approximations above, and the multiplicative
bounds transform into approximation guarantees for these problems.
3 Relation between SCSC and SCSK
In this section, we show a precise relationship between Problems 1 and 2. From the formulation of Problems 1 and 2, it is clear that these problems are duals of each other. Indeed,
in this section we show that the problems are polynomially transformable into each other.
Algorithm 2: Approx. algorithm for SCSK using an approximation algorithm for SCSC.
1: Input: An SCSK instance with budget b, an [σ, ρ] approx. algo. for SCSC, ε ∈ [0, 1).
2: Output: [(1 − ε)σ, ρ] approx. for SCSK.
3: c ← g(V), X̂_c ← V.
4: while f(X̂_c) > ρb do
5:   c ← (1 − ε)c
6:   X̂_c ← [σ, ρ] approx. for SCSC using c.
7: end while

Algorithm 3: Approx. algorithm for SCSC using an approximation algorithm for SCSK.
1: Input: An SCSC instance with cover c, an [σ, ρ] approx. algo. for SCSK, ε > 0.
2: Output: [(1 + ε)σ, ρ] approx. for SCSC.
3: b ← min_j f(j), X̂_b ← ∅.
4: while g(X̂_b) < ρc do
5:   b ← (1 + ε)b
6:   X̂_b ← [σ, ρ] approx. for SCSK using b.
7: end while
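A compact Python rendering of Algorithm 2 (a sketch with our own naming; the SCSC oracle is passed in as a black box returning a set for a given cover value c):

def scsk_via_scsc(f, g, V, b, scsc_oracle, rho, eps=0.1):
    # Solve SCSK (max g(X) s.t. f(X) <= b) by geometrically shrinking the
    # cover value c and re-calling a [sigma, rho] SCSC oracle, as above.
    # Termination relies on f being normalized (f(emptyset) = 0).
    c, X = g(frozenset(V)), frozenset(V)
    while f(X) > rho * b:
        c = (1.0 - eps) * c
        X = scsc_oracle(c)
    return X

Algorithm 3 is the mirror image, growing b by a factor (1 + ε) until the returned set covers enough.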
We first introduce the notion of bicriteria algorithms. An algorithm is a [σ, ρ] bi-criterion algorithm for Problem 1 if it is guaranteed to obtain a set X such that f(X) ≤ σ f(X*) (approximate optimality) and g(X) ≥ c′ = ρc (approximate feasibility), where X* is an optimizer of Problem 1. Similarly, an algorithm is a [σ, ρ] bi-criterion algorithm for Problem 2 if it is guaranteed to obtain a set X such that g(X) ≥ σ g(X*) and f(X) ≤ b′ = ρb, where X* is the optimizer of Problem 2. In a bi-criterion algorithm for Problems 1 and 2, typically σ ≥ 1 and ρ ≤ 1. A non-bicriterion algorithm for Problem 1 is when ρ = 1, and a non-bicriterion algorithm for Problem 2 is when ρ = 1. Algorithms 2 and 3 provide the schematics for using an approximation algorithm for one of the problems for solving the other.
Theorem 3.1. Algorithm 2 is guaranteed to find a set X̂_c which is a [(1 − ε)σ, ρ] approximation of SCSK in at most log_{1/(1−ε)}[g(V)/min_j g(j)] calls to the [σ, ρ] approximate algorithm for SCSC. Similarly, Algorithm 3 is guaranteed to find a set X̂_b which is a [(1 + ε)σ, ρ] approximation of SCSC in log_{1+ε}[f(V)/min_j f(j)] calls to a [σ, ρ] approximate algorithm for SCSK.
Theorem 3.1 implies that the complexity of Problems 1 and 2 are identical, and a solution to one of
them provides a solution to the other. Furthermore, as expected, the hardness of Problems 1 and 2 are
also almost identical. When f and g are polymatroid functions, moreover, we can provide bounded approximation guarantees for both problems, as shown in the next section. Alternatively we can also do a
binary search instead of a linear search to transform Problems 1 and 2. This essentially turns the factor of O(1/ε) into O(log 1/ε). Due to lack of space, we defer this discussion to the extended version [11].
4 Approximation Algorithms
We consider several algorithms for Problems 1 and 2, which can all be characterized by the framework
of Algorithm 1, using the surrogate functions of the form of upper/lower bounds or approximations.
4.1 Approximation Algorithms for SCSC
We first describe our approximation algorithms designed specifically for SCSC, leaving to §4.2 the
presentation of our algorithms slated for SCSK. We first investigate a special case, the submodular
set cover (SSC), and then provide two algorithms, one of them (ISSC) is very practical with a weaker
theoretical guarantee, and another one (EASSC) which is slow but has the tightest guarantee.
Submodular Set Cover (SSC): We start by considering a classical special case of SCSC (Problem
1) where f is already a modular function and g is a submodular function. This problem occurs
naturally in a number of problems related to active/online learning [7] and summarization [21, 22].
This problem was first investigated by Wolsey [29], wherein he showed that a simple greedy algorithm
achieves bounded (in fact, log-factor) approximation guarantees. We show that this greedy algorithm
can naturally be viewed in the framework of our Algorithm 1 by choosing appropriate surrogate
functions f?t and g?t . The idea is to use the modular function f as its own surrogate f?t and choose the
function g?t as a modular lower bound of g. Akin to the framework of algorithms in [13], the crucial
factor is the choice of the lower bound (or subgradient). Define the greedy subgradient as:
σ(i) ∈ argmin_{ j ∉ S^σ_{i−1} : g(S^σ_{i−1} ∪ j) < c }  f(j) / g(j | S^σ_{i−1}).    (3)

Once we reach an i where the constraint g(S^σ_{i−1} ∪ j) < c can no longer be satisfied by any j ∉ S^σ_{i−1}, we choose the remaining elements for σ arbitrarily. Let the corresponding subgradient be referred to as h^σ. Then we have the following lemma, which is an extension of [29], and which is a simpler description of the result stated formally in [11].
Lemma 4.1. The greedy algorithm for SSC [29] can be seen as an instance of Algorithm 1 by choosing the surrogate function f̂ as f and ĝ as h^σ (with σ defined in Eqn. (3)).
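In code, the primal greedy with a modular cost (given as a weight dictionary w, our naming) reads as the following sketch:

def greedy_ssc(w, g, V, c):
    # Greedy for SSC: minimize sum_{j in X} w[j] subject to g(X) >= c,
    # repeatedly adding the element with the best cost-per-gain ratio (Eqn. (3)).
    X = set()
    while g(frozenset(X)) < c:
        gX = g(frozenset(X))
        best, best_ratio = None, float('inf')
        for j in set(V) - X:
            gain = g(frozenset(X | {j})) - gX
            if gain > 0 and w[j] / gain < best_ratio:
                best, best_ratio = j, w[j] / gain
        if best is None:
            break          # cover level c is unreachable; return what we have
        X.add(best)
    return X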
When g is integral, the guarantee of the greedy algorithm is H_g ≜ H(max_j g(j)), where H(d) = Σ_{i=1}^d 1/i [29] (henceforth we will use H_g for this quantity). This factor is tight up to lower-order terms [3]. Furthermore, since this algorithm directly solves SSC, we call it the primal greedy.
We could also solve SSC by looking at its dual, which is SK [28]. Although SSC does not admit any
constant-factor approximation algorithms [3], we can obtain a constant-factor bi-criterion guarantee:
Lemma 4.2. Using the greedy algorithm for SK [28] as the approximation oracle in Algorithm 3
provides a [1 + ε, 1 − e^{−1}] bi-criterion approximation algorithm for SSC, for any ε > 0.
We call this the dual greedy. This result follows immediately from the guarantee of the submodular cost knapsack problem [28] and Theorem 3.1. We remark that we can also use a simpler version of the greedy iteration at every iteration [21, 17] and we obtain a guarantee of (1 + ε, ½(1 − e^{−1})).
In practice, however, both these factors are almost 1 and hence the simple variant of the greedy
algorithm suffices.
Iterated Submodular Set Cover (ISSC): We next investigate an algorithm for the general SCSC
problem when both f and g are submodular. The idea here is to iteratively solve the submodular
set cover problem which can be done by replacing f by a modular upper bound at every iteration.
In particular, this can be seen as a variant of Algorithm 1, where we start with X^0 = ∅ and choose f̂_t(X) = m^f_{X^t}(X) at every iteration. The surrogate problem at each iteration becomes min{m^f_{X^t}(X) | g(X) ≥ c}. Hence, each iteration is an instance of SSC and can be solved nearly
optimally using the greedy algorithm. We can continue this algorithm for T iterations or until
convergence. An analysis very similar to the ones in [9, 13] will reveal polynomial time convergence.
Since each iteration is only the greedy algorithm, this approach is also highly practical and scalable.
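One ISSC iteration therefore replaces f by the modular upper bound m^f_{X,2} tight at the current solution and calls the SSC greedy. A sketch (ours, reusing greedy_ssc from above; the additive constant of the bound is dropped, since it does not change the minimizer):

def issc(f, g, V, c, T=10):
    X, fV = frozenset(), f(frozenset(V))
    for _ in range(T):
        w = {j: (fV - f(frozenset(V) - {j}))      # f(j | V \ j) for j in X
                if j in X else
                (f(X | {j}) - f(X))               # f(j | X) for j not in X
             for j in V}
        X_new = frozenset(greedy_ssc(w, g, V, c))
        if X_new == X:
            break
        X = X_new
    return X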
Theorem 4.3. ISSC obtains an approximation factor of K_g H_g / (1 + (K_g − 1)(1 − κ_f)) ≤ n H_g / (1 + (n − 1)(1 − κ_f)), where K_g = 1 + max{|X| : g(X) < c} and H_g is the approximation factor of the submodular set cover using g.
From the above, it is clear that K_g ≤ n. Notice also that H_g is essentially a log-factor. We also see an interesting effect of the curvature κ_f of f. When f is modular (κ_f = 0), we recover the approximation guarantee of the submodular set cover problem. Similarly, when f has restricted
curvature, the guarantees can be much better. Moreover, the approximation guarantee already holds
after the first iteration, so additional iterations can only further improve the objective.
Ellipsoidal Approximation based Submodular Set Cover (EASSC): In this setting, we use the ellipsoidal approximation discussed in §2. We can compute the κ_f-curve-normalized version of f (f^κ, see §2), and then compute its ellipsoidal approximation √(w_{f^κ}). We then define the function f̂(X) = f^ea(X) = κ_f √(w_{f^κ}(X)) + (1 − κ_f) Σ_{j∈X} f(j) and use this as the surrogate function f̂ for f. We choose ĝ as g itself. The surrogate problem becomes:

min { κ_f √(w_{f^κ}(X)) + (1 − κ_f) Σ_{j∈X} f(j)  :  g(X) ≥ c }.    (4)

While the function f̂(X) = f^ea(X) is not modular, it is a weighted sum of a concave over modular function and a modular function. Fortunately, we can use the result from [26], where they show that any function of the form √(w_1(X)) + w_2(X) can be optimized over any polytope P with an approximation factor of β(1 + ε) for any ε > 0, where β is the approximation factor of optimizing a modular function over P. The complexity of this algorithm is polynomial in n and 1/ε. We use their algorithm to minimize f^ea(X) over the submodular set cover constraint and hence we call this algorithm EASSC.

Theorem 4.4. EASSC obtains a guarantee of O( (√n log n H_g) / (1 + (√n log n − 1)(1 − κ_f)) ), where H_g is the approximation guarantee of the set cover problem.

If the function f has κ_f = 1, we can use a much simpler algorithm. In particular, we can minimize (f^ea(X))² = w_f(X) at every iteration, giving a surrogate problem of the form min{w_f(X) | g(X) ≥ c}. This is directly an instance of SSC, and in contrast to EASSC, we just need to solve SSC once. We call this algorithm EASSCc.

Corollary 4.5. EASSCc obtains an approximation guarantee of O(√(n log n) H_g).
4.2 Approximation Algorithms for SCSK
In this section, we describe our approximation algorithms for SCSK. We note the dual nature of
the algorithms in this current section to those given in §4.1. We first investigate a special case, the
submodular knapsack (SK), and then provide three algorithms, two of them (Gr and ISK) being
practical with slightly weaker theoretical guarantee, and another one (EASK) which is not scalable
but has the tightest guarantee.
Submodular Cost Knapsack (SK): We start with a special case of SCSK (Problem 2), where f is
a modular function and g is a submodular function. In this case, SCSK turns into the SK problem for
which the greedy algorithm with partial enumeration provides a 1 − e^{−1} approximation [28]. The greedy algorithm can be seen as an instance of Algorithm 1 with ĝ being the modular lower bound of g and f̂ being f, which is already modular. In particular, define:

σ(i) ∈ argmax_{ j ∉ S^σ_{i−1} : f(S^σ_{i−1} ∪ {j}) ≤ b }  g(j | S^σ_{i−1}) / f(j),    (5)

where the remaining elements are chosen arbitrarily. The following is an informal description of the result described formally in [11].
Lemma 4.6. Choosing the surrogate function f̂ as f and ĝ as h^σ (with σ defined in Eqn. (5)) in Algorithm 1 with appropriate initialization obtains a guarantee of 1 − 1/e for SK.
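The corresponding code sketch for the SK greedy (ours; we omit the partial enumeration needed for the full 1 − 1/e guarantee, which in practice changes little):

def greedy_sk(w, g, V, b):
    # Greedily maximize submodular g under the modular budget sum_j w[j] <= b,
    # picking the best gain-to-cost ratio as in Eqn. (5); assumes w[j] > 0.
    X, cost = set(), 0.0
    while True:
        gX = g(frozenset(X))
        best, best_ratio = None, 0.0
        for j in set(V) - X:
            if cost + w[j] > b or w[j] <= 0:
                continue
            gain = g(frozenset(X | {j})) - gX
            if gain / w[j] > best_ratio:
                best, best_ratio = j, gain / w[j]
        if best is None:
            return X
        X.add(best)
        cost += w[best]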
Greedy (Gr): A similar greedy algorithm can provide approximation guarantees for the general
SCSK problem, with submodular f and g. Unlike the knapsack case in (5), however, at iteration
i we choose an element j ∉ S_{i−1} with f(S^σ_{i−1} ∪ {j}) ≤ b which maximizes g(j | S_{i−1}). In terms of Algorithm 1, this is analogous to choosing a permutation σ such that:

σ(i) ∈ argmax{ g(j | S^σ_{i−1}) : j ∉ S^σ_{i−1}, f(S^σ_{i−1} ∪ {j}) ≤ b }.    (6)
Theorem 4.7. The greedy algorithm for SCSK obtains an approximation factor of (1/κ_g)(1 − ((K_f − κ_g)/K_f)^{k_f}) ≥ 1/K_f, where K_f = max{|X| : f(X) ≤ b} and k_f = min{|X| : f(X) ≤ b and ∀j ∉ X, f(X ∪ j) > b}.
In the worst case, k_f = 1 and K_f = n, in which case the guarantee is 1/n. The bound above follows from a simple observation that the constraint {f(X) ≤ b} is down-monotone for a monotone function f. However, in this variant, we do not use any specific information about f. In particular it holds for maximizing a submodular function g over any down-monotone constraint [2]. Hence it is conceivable that an algorithm that uses both f and g to choose the next element could provide better bounds. We do not, however, currently have the analysis for this.
Iterated Submodular Cost Knapsack (ISK): Here, we choose f̂_t(X) as a modular upper bound of f, tight at X^t. Let ĝ_t = g. Then at every iteration, we solve max{g(X) | m^f_{X^t}(X) ≤ b}, which is a submodular maximization problem subject to a knapsack constraint (SK). As mentioned above, greedy can solve this nearly optimally. We start with X^0 = ∅, choose f̂_0(X) = Σ_{j∈X} f(j), and then iteratively continue this process until convergence (note that this is an ascent algorithm). We have the following theoretical guarantee:

Theorem 4.8. Algorithm ISK obtains a set X^t such that g(X^t) ≥ (1 − e^{−1}) g(X̃), where X̃ is the optimal solution of max{ g(X) | f(X) ≤ b(1 + (K_f − 1)(1 − κ_f))/K_f } and where K_f = max{|X| : f(X) ≤ b}.
It is worth pointing out that the above bound holds even after the first iteration of the algorithm. It is
interesting to note the similarity between this approach and ISSC. Notice that the guarantee above is
not a standard bi-criterion approximation. We show in the extended version [11] that with a simple
transformation, we can obtain a bicriterion guarantee.
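An ISK iteration in the same style (our sketch, reusing greedy_sk from above; note that the constant part of m^f_{X,1} must be folded into the budget, since here the bound sits inside the constraint):

def isk(f, g, V, b, T=10):
    # Assumes f normalized (f(emptyset) = 0), so the initial surrogate is the
    # modular function sum_{j in X} f(j), exactly as in the text.
    X = frozenset()
    for _ in range(T):
        w = {j: (f(X) - f(X - {j}))               # f(j | X \ j) for j in X
                if j in X else
                f(frozenset({j}))                 # f(j | emptyset) = f(j)
             for j in V}
        const = f(X) - sum(f(X) - f(X - {j}) for j in X)
        X_new = frozenset(greedy_sk(w, g, V, b - const))
        if X_new == X:
            break
        X = X_new
    return X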
Ellipsoidal Approximation based Submodular Cost Knapsack (EASK): Choosing the ellipsoidal approximation f^ea of f as a surrogate function, we obtain a simpler problem:

max { g(X) :  κ_f √(w_{f^κ}(X)) + (1 − κ_f) Σ_{j∈X} f(j) ≤ b }.    (7)

In order to solve this problem, we look at its dual problem (i.e., Eqn. (4)) and use Algorithm 2 to convert the guarantees. We call this procedure EASK. We then obtain guarantees very similar to Theorem 4.4.

Lemma 4.9. EASK obtains a guarantee of [1 + ε, O( (√n log n H_g) / (1 + (√n log n − 1)(1 − κ_f)) )].

In the case when the submodular function has a curvature κ_f = 1, we can actually provide a simpler algorithm without needing to use the conversion algorithm (Algorithm 2). In this case, we can directly choose the ellipsoidal approximation of f as √(w_f(X)) and solve the surrogate problem: max{g(X) : w_f(X) ≤ b²}. This surrogate problem is a submodular cost knapsack problem, which we can solve using the greedy algorithm. We call this algorithm EASKc. This guarantee is tight up to log factors if κ_f = 1.

Corollary 4.10. Algorithm EASKc obtains a bi-criterion guarantee of [1 − e^{−1}, O(√(n log n))].
4.3 Extensions beyond SCSC and SCSK
SCSC and SCSK can in fact be extended to more flexible and complicated constraints which can arise naturally in many applications [18, 8]. These include multiple covering and knapsack constraints, i.e., min{f(X) | g_i(X) ≥ c_i, i = 1, 2, · · · , k} and max{g(X) | f_i(X) ≤ b_i, i = 1, 2, · · · , k}, and robust optimization problems like max{min_i g_i(X) | f(X) ≤ b}, where the functions f, g, the f_i's and the g_i's are
submodular. We also consider SCSC and SCSK with non-monotone submodular functions. Due to
lack of space, we defer these discussions to the extended version of this paper [11].
4.4 Hardness
In this section, we provide the hardness for Problems 1 and 2. The lower bounds serve to show that the approximation factors above are almost tight.
Theorem 4.11. For any κ > 0, there exist submodular functions with curvature κ such that no polynomial time algorithm for Problems 1 and 2 achieves a bi-criterion factor better than σ = n^{1/2−ε} / (1 + (n^{1/2−ε} − 1)(1 − κ)) for any ε > 0.
The above result shows that EASSC and EASK meet the bounds above to log factors. We see an
interesting curvature-dependent influence on the hardness. We also see this phenomenon in the
approximation guarantees of our algorithms. In particular, as soon as f becomes modular, the
problem becomes easy, even when g is submodular. This is not surprising since the submodular
set cover problem and the submodular cost knapsack problem both have constant factor guarantees.
5 Experiments
In this section, we empirically compare the performance of the various algorithms discussed in this
paper. We are motivated by the speech data subset selection application [20, 23] with the submodular
function f encouraging limited vocabulary while g tries to achieve acoustic variability. A natural
choice of the function f is a function of the form |Γ(X)|, where Γ(X) is the neighborhood function on a bipartite graph constructed between the utterances and the words [23]. For the coverage function g, we use two types of coverage: one is a facility location function g_1(X) = Σ_{i∈V} max_{j∈X} s_{ij}, while the other is a saturated sum function g_2(X) = Σ_{i∈V} min{ Σ_{j∈X} s_{ij}, α Σ_{j∈V} s_{ij} }. Both these functions are defined in terms of a similarity matrix S = {s_{ij}}_{i,j∈V}, which we define on the TIMIT corpus [5], using the string kernel metric [27] for similarity. Since some of our algorithms, like the Ellipsoidal Approximations, are computationally intensive, we restrict ourselves to 50 utterances.
We compare our different algorithms on Problems 1 and 2 with f being the bipartite neighborhood and g being the facility location and saturated sum respectively. Furthermore, in our experiments, we observe that the neighborhood function f has a curvature κ_f = 1. Thus, it suffices to use the simpler versions of algorithm EA (i.e., algorithms EASSCc and EASKc). The results are shown in Figure 1. We observe that on the real-world instances, all our algorithms perform almost comparably. This implies, moreover, that the iterative variants, viz. Gr, ISSC and ISK, perform comparably to the more complicated EA-based ones, although EASSC and EASK have better theoretical guarantees. We also compare against a baseline of selecting random sets (of varying cardinality), and we see that our algorithms all perform much better. In terms of the running time, computing the Ellipsoidal Approximation for |Γ(X)| with |V| = 50 takes about 5 hours while all the iterative variants (i.e., Gr, ISSC and ISK) take less than a second. This difference is much more prominent on larger instances (for example |V| = 500).

[Figure 1: Comparison of the algorithms in the text. Two panels plot g(X) against f(X): facility location / bipartite neighborhood (left) and saturated sum / bipartite neighborhood (right), with curves for ISSC, EASSCc, ISK, Gr, EASKc, and Random.]

6 Discussions
In this paper, we propose a unifying framework for Problems 1 and 2 based on suitable surrogate
functions. We provide a number of iterative algorithms which are very practical and scalable (like
Gr, ISK and ISSC), and also algorithms like EASSC and EASK, which though more intensive,
obtain tight approximation bounds. Finally, we empirically compare our algorithms, and show that
the iterative algorithms compete empirically with the more complicated and theoretically better
approximation algorithms. For future work, we would like to empirically evaluate our algorithms on
many of the real world problems described above, particularly the limited vocabulary data subset
selection application for speech corpora, and the machine translation application.
Acknowledgments: Special thanks to Kai Wei and Stefanie Jegelka for discussions, to Bethany
Herwaldt for going through an early draft of this manuscript and to the anonymous reviewers for
useful reviews. This material is based upon work supported by the National Science Foundation
under Grant No. (IIS-1162606), a Google and a Microsoft award, and by the Intel Science and
Technology Center for Pervasive Computing.
References
[1] A. Atamtürk and V. Narayanan. The submodular knapsack polytope. Discrete Optimization, 2009.
[2] M. Conforti and G. Cornuejols. Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 7(3):251–274, 1984.
[3] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 1998.
[4] S. Fujishige. Submodular functions and optimization, volume 58. Elsevier Science, 2005.
[5] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, and N. Dahlgren. TIMIT, acoustic-phonetic continuous speech corpus. In DARPA, 1993.
[6] M. Goemans, N. Harvey, S. Iwata, and V. Mirrokni. Approximating submodular functions everywhere. In SODA, pages 535–544, 2009.
[7] A. Guillory and J. Bilmes. Interactive submodular set cover. In ICML, 2010.
[8] A. Guillory and J. Bilmes. Simultaneous learning and covering with adversarial noise. In ICML, 2011.
[9] R. Iyer and J. Bilmes. Algorithms for approximate minimization of the difference between submodular
functions, with applications. In UAI, 2012.
[10] R. Iyer and J. Bilmes. The submodular Bregman and Lovász-Bregman divergences with applications. In NIPS, 2012.
[11] R. Iyer and J. Bilmes. Submodular Optimization with Submodular Cover and Submodular Knapsack
Constraints: Extended arxiv version, 2013.
[12] R. Iyer, S. Jegelka, and J. Bilmes. Curvature and optimal algorithms for learning and minimizing submodular functions. In NIPS, 2013.
[13] R. Iyer, S. Jegelka, and J. Bilmes. Fast semidifferential based submodular function optimization. In ICML,
2013.
[14] S. Jegelka and J. A. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In
CVPR, 2011.
[15] Y. Kawahara and T. Washio. Prismatic algorithm for discrete dc programming problems. In NIPS, 2011.
[16] H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack problems. Springer Verlag, 2004.
[17] A. Krause and C. Guestrin. A note on the budgeted maximization of submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University, 2005.
[18] A. Krause, B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. Journal of Machine Learning Research (JMLR), 9:2761–2801, 2008.
[19] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. JMLR, 9:235–284, 2008.
[20] H. Lin and J. Bilmes. How to select a good training-data subset for transcription: Submodular active
selection for sequences. In Interspeech, 2009.
[21] H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions.
In NAACL, 2010.
[22] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In The 49th Annual
Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL/HLT 2011), Portland, OR, June 2011.
[23] H. Lin and J. Bilmes. Optimal selection of limited vocabulary speech corpora. In Interspeech, 2011.
[24] R. C. Moore and W. Lewis. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220–224. Association for Computational Linguistics, 2010.
[25] M. Narasimhan and J. Bilmes. A submodular-supermodular procedure with applications to discriminative
structure learning. In UAI, 2005.
[26] E. Nikolova. Approximation algorithms for offline risk-averse combinatorial optimization, 2010.
[27] J. Rousu and J. Shawe-Taylor. Efficient computation of gapped substring kernels on large alphabets.
Journal of Machine Learning Research, 6(2):1323, 2006.
[28] M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32(1):41–43, 2004.
[29] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982.
4,323 | 4,912 | How to Hedge an Option Against an Adversary:
Black-Scholes Pricing is Minimax Optimal
Jacob Abernethy
University of Michigan
[email protected]
Peter L. Bartlett
University of California at Berkeley
and Queensland University of Technology
[email protected]
Rafael M. Frongillo
Microsoft Research
[email protected]
Andre Wibisono
University of California at Berkeley
[email protected]
Abstract
We consider a popular problem in finance, option pricing, through the lens of an
online learning game between Nature and an Investor. In the Black-Scholes option pricing model from 1973, the Investor can continuously hedge the risk of
an option by trading the underlying asset, assuming that the asset's price fluctuates according to Geometric Brownian Motion (GBM). We consider a worst-case model, in which Nature chooses a sequence of price fluctuations under a cumulative quadratic volatility constraint, and the Investor can make a sequence of hedging decisions. Our main result is to show that the value of our proposed game, which is the "regret" of the hedging strategy, converges to the Black-Scholes option price. We use significantly weaker assumptions than previous work (for instance, we allow large jumps in the asset price) and show that the Black-Scholes hedging
strategy is near-optimal for the Investor even in this non-stochastic framework.
1 Introduction
An option is a financial contract that allows the purchase or sale of a given asset, such as a stock,
bond, or commodity, for a predetermined price on a predetermined date. The contract is named as
such because the transaction in question is optional for the purchaser of the contract. Options are
bought and sold for any number of reasons, but in particular they allow firms and individuals with
risk exposure to hedge against potential price fluctuations. Airlines, for example, have heavy fuel
costs and hence are frequent buyers of oil options.
What ought we pay for the privilege of purchasing an asset at a fixed price on a future expiration
date? The difficulty with this question, of course, is that while we know the asset's previous prices,
we are uncertain as to its future price. In a seminal paper from 1973, Fischer Black and Myron
Scholes introduced what is now known as the Black-Scholes Option Pricing Model, which led to a
boom in options trading as well as a huge literature on the problem of derivative pricing [2]. Black
and Scholes had a key insight that a firm which had sold/purchased an option could "hedge" against
the future cost/return of the option by buying and selling the underlying asset as its price fluctuates.
Their model is based on stochastic calculus and requires a critical assumption that the asset's price behaves according to a Geometric Brownian Motion (GBM) with known drift and volatility.
The GBM assumption in particular implies that (almost surely) an asset's price fluctuates continuously. The Black-Scholes model additionally requires that the firm be able to buy and sell continuously until the option's expiration date. Neither of these properties is true in practice: the stock
market is only open eight hours per day, and stock prices are known to make significant jumps even
during regular trading. These and other empirical observations have led to much criticism of the
Black-Scholes model.
An alternative model for option pricing was considered¹ by DeMarzo et al. [3], who posed the question: "Can we construct hedging strategies that are robust to adversarially chosen price fluctuations?" Essentially, the authors asked if we may consider hedging through the lens of regret
minimization in online learning, an area that has proved fruitful, especially for obtaining guarantees
robust to worst-case conditions. Within this minimax option pricing framework, DeMarzo et al. provided a particular algorithm resembling the Weighted Majority and Hedge algorithms [5, 6] with a
nice bound.
Recently, Abernethy et al. [1] took the minimax option pricing framework a step further, analyzing
the zero-sum game being played between an Investor, who is attempting to replicate the option
payoff, and Nature, who is sequentially setting the price changes of the underlying asset. The
Investor's goal is to "hedge" the payoff of the option as the price fluctuates, whereas Nature attempts to foil the Investor by choosing a challenging sequence of price fluctuations. The value of this game can be interpreted as the "minimax option price," since it is what the Investor should pay for the option against an adversarially chosen price path. The main result of Abernethy et al. was to show that the game value approaches the Black-Scholes option price as the Investor's trading frequency increases. Put another way, the minimax price tends to the option price under the GBM assumption. This lends significant further credibility to the Black-Scholes model, as it suggests that the GBM assumption may already be a "worst-case model" in a certain sense.
The previous result, while useful and informative, left two significant drawbacks. First, their techniques used minimax duality to compute the value of the game, but no particular hedging algorithm for the Investor is given. This is in contrast to the Black-Scholes framework (as well as to DeMarzo et al.'s result [3]) in which a hedging strategy is given explicitly. Second, the result depended on a strong constraint on Nature's choice of price path: the multiplicative price variance is uniformly
constrained, which forbids price jumps and other large fluctuations.
In this paper, we resolve these two drawbacks. We consider the problem of minimax option pricing
with much weaker constraints: we restrict the sum over the length of the game of the squared price
fluctuations to be no more than a constant c, and we allow arbitrary price jumps, up to a bound ζ. We show that the minimax option price is exactly the Black-Scholes price of the option, up to an additive term of O(c ζ^{1/4}). Furthermore, we give an explicit hedging strategy: this upper bound is achieved when the Investor's strategy is essentially a version of the Black-Scholes hedging algorithm.
2 The Black-Scholes Formula
Let us now briefly review the Black-Scholes pricing formula and hedging strategy. The derivation requires some knowledge of continuous random walks and stochastic calculus (Brownian motion, Itô's Lemma, a second-order partial differential equation) and we shall only give a cursory treatment of the material. For further development we recommend a standard book on stochastic calculus, e.g. [8]. Let us imagine we have an underlying asset A whose price is fluctuating. We let W(t) be a Brownian motion, also known as a Wiener process, with zero drift and unit variance; in particular, W(0) = 0 and W(t) ∼ N(0, t) for t > 0. We shall imagine that A's price path G(t) is described by a geometric Brownian motion with drift μ and volatility σ, which we can describe via the definition of a Brownian motion: G(t) =_d exp{(μ − ½σ²)t + σ W(t)}.
If an Investor purchases a European call option on some asset A (say, MSFT stock) with a strike
price of K > 0 that matures at time T , then the Investor has the right to buy a share of A at price K
at time T. Of course, if the market price of A at T is G(T), then the Investor will only "exercise" the option if G(T) > K, since the Investor has no benefit of purchasing the asset at a price higher than the market price. Hence, the payoff of a European call option has a profit function of the form max{0, G(T) − K}. Throughout the paper we shall use g_EC(x) := max{0, x − K} to refer to the payout of the European call when the price of asset A at time T is x (the parameter K is implicit).
¹ Although it does not have quite the same flavor, a similar approach was explored in the book of Vovk and Shafer [7].
We assume the current time is t. The Black-Scholes derivation begins with a guess: assume that the "value" of the European call option can be described by a smooth function V(G(t), t), depending only on the current price of the asset G(t) and the time to expiration T − t. We can immediately define a boundary condition on V, since at the expiration time T the value of the option is V(G(T), 0) = g_EC(G(T)).
So how do we arrive at a value for the option at another time point t? We assume the Investor has a hedging strategy Δ(x, t) that determines the amount to invest when the current price is x and the time is t. Notice that if the asset's current price is G(t) and the Investor purchases Δ(G(t), t) dollars of asset A at t, then the incremental amount of money made in an infinitesimal amount of time is Δ(G(t), t) dG/G(t), since dG/G(t) is the instantaneous multiplicative price change at time t. Of course, if the earnings of the Investor are guaranteed to exactly cancel out the infinitesimal change in the value of the option dV(G(t), t), then the Investor is totally hedged with respect to the option payout for any sample of G for the remaining time to expiration. In other words, we hope to achieve dV(G, t) = Δ(G, t) dG/G. However, by Itô's Lemma [8] we have the following useful identity:
dV(G, t) = (∂V/∂x) dG + (∂V/∂t) dt + ½ σ² G² (∂²V/∂x²) dt.    (1)

Black and Scholes proposed a generic hedging strategy: that the Investor should invest
Δ(x, t) = x ∂V/∂x    (2)

dollars in the asset A when the price of A is x at time t. As mentioned, the goal of the Investor is to hedge out risk so that it is always the case that dV(G, t) = Δ(G, t) dG/G. Combining this goal with Equations (1) and (2), we have
∂V/∂t + ½ σ² x² ∂²V/∂x² = 0.    (3)

Notice the latter is an entirely non-stochastic PDE, and indeed it can be solved explicitly:

V(x, t) = E_Y[g_EC(x · exp(Y))]  where  Y ∼ N(−½ σ²(T − t), σ²(T − t)).    (4)
Remark: While we have described the derivation for the European call option, with payoff function
g_EC, the analysis above does not rely on this specific choice of g. We refer the reader to a standard
text on asset pricing for more on this [8].
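To make (4) concrete, the following Python sketch (ours, not part of the original text) estimates V(x, t) by Monte Carlo and compares it against the zero-interest-rate closed form for the European call; all function names are our own.

import math, random

def bs_value_mc(x, t, T, sigma, K, n=200000, seed=0):
    # V(x,t) = E_Y[g_EC(x*exp(Y))] with Y ~ N(-sigma^2(T-t)/2, sigma^2(T-t)).
    rng, tau = random.Random(seed), T - t
    mu, sd = -0.5 * sigma**2 * tau, sigma * math.sqrt(tau)
    return sum(max(0.0, x * math.exp(rng.gauss(mu, sd)) - K)
               for _ in range(n)) / n

def bs_call_closed_form(x, tau, sigma, K):
    # Black-Scholes call price with zero interest rate.
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    d1 = (math.log(x / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    return x * Phi(d1) - K * Phi(d1 - sigma * math.sqrt(tau))

print(bs_value_mc(100, 0, 1, 0.2, 100))        # approx. 7.97
print(bs_call_closed_form(100, 1, 0.2, 100))   # approx. 7.966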
3 The Minimax Hedging Game
We now describe a sequential decision protocol in which an Investor makes a sequence of trading
decisions on some underlying asset, with the goal of hedging away the risk of some option (or other
financial derivative) whose payout depends on the final price of the asset at the expiration time T .
We assume the Investor is allowed to make a trading decision at each of n time periods, and before
making this trade the investor observes how the price of the asset has changed since the previous
period. Without loss of generality, we can assume that the current time is 0 and the trading periods occur at {T/n, 2T/n, . . . , T}, although this will not be necessary for our analysis.
The protocol is as follows.
1: Initial price of asset is S = S_0.
2: for i = 1, 2, . . . , n do
3:   Investor hedges, invests Δ_i ∈ ℝ dollars in asset.
4:   Nature selects a price fluctuation r_i and updates price S ← S(1 + r_i).
5:   Investor receives (potentially negative) profit of Δ_i r_i.
6: end for
7: Investor is charged the cost of the option, g(S) = g(S_0 · ∏_{i=1}^n (1 + r_i)).
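The protocol is straightforward to simulate. The sketch below (our code, with a generic hedging callback; names are ours) plays one run of the game and returns the Investor's loss, i.e. the regret:

def play_hedging_game(hedge, moves, g, S0):
    # hedge(S, c_remaining) -> dollars invested; moves = Nature's (r_1,...,r_n).
    S, profit = S0, 0.0
    c_rem = sum(r * r for r in moves)    # the variance budget Nature will spend
    for r in moves:
        profit += hedge(S, c_rem) * r
        S *= 1.0 + r
        c_rem -= r * r
    return g(S) - profit

g_call = lambda S: max(0.0, S - 100.0)
print(play_hedging_game(lambda S, c: 0.0, [0.02, -0.01, 0.03], g_call, 100.0))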
Stepping back for a moment, we see that the Investor is essentially trying to minimize the following
objective:
g( S_0 · ∏_{i=1}^n (1 + r_i) ) − Σ_{i=1}^n Δ_i r_i .
We can interpret the above expression as a form of regret: the Investor chose to execute a trading strategy, earning him Σ_{i=1}^n Δ_i r_i, but in hindsight might have rather purchased the option instead, with a payout of g(S_0 · ∏_{i=1}^n (1 + r_i)). What is the best hedging strategy the Investor can execute to minimize the difference between the option payoff and the gains/losses from hedging? Indeed, how much regret may be suffered against a worst-case sequence of price fluctuations?
Constraining Nature. The cost of playing the above sequential game is clearly going to depend on how much we expect the price to fluctuate. In the original Black-Scholes formulation, the price volatility is a major parameter in the pricing function. In the work of Abernethy et al., a key assumption was that Nature may choose any r_1, . . . , r_n with the constraint that E[r_i² | r_1, . . . , r_{i−1}] = O(1/n).² Roughly, this constraint means that in any ε-sized time interval, the price fluctuation variance shall be no more than ε. This constraint, however, does not allow for large price jumps during trading. In the present work, we impose a much weaker set of constraints, described as follows:³
• TotVarConstraint: The total price fluctuation is bounded by a constant c: Σ_{i=1}^n r_i² ≤ c.
• JumpConstraint: Every price jump |r_i| is no more than ζ, for some ζ > 0 (which may depend on n).
The first constraint above says that Nature is bounded by how much, in total, the asset's price path can fluctuate. The latter says that at no given time can the asset's price jump more than a given value. It is worth noting that if c ≥ nζ² then TotVarConstraint is superfluous, whereas JumpConstraint becomes superfluous if c < ζ².
The Minimax Option Price We are now in a position to define the value of the sequential option
pricing game using a minimax formulation. That is, we shall ask how much the Investor loses
when making optimal trading decisions against worst-case price fluctuations chosen by Nature. Let
V_ζ^{(n)}(S; c, m) be the value of the game, measured by the Investor's loss, when the asset's current price is S ≥ 0, the TotVarConstraint is c ≥ 0, the JumpConstraint is ζ > 0, the total number of trading rounds is n ∈ ℕ, and there are 0 ≤ m ≤ n rounds remaining. We define recursively:

V_ζ^{(n)}(S; c, m) = inf_{Δ∈ℝ}  sup_{r : |r| ≤ min{ζ, √c}}  [ −Δ r + V_ζ^{(n)}((1 + r)S; c − r², m − 1) ],    (5)

with the base case V_ζ^{(n)}(S; c, 0) = g(S). Notice that the constraint under the supremum enforces both TotVarConstraint and JumpConstraint. For simplicity, we will write V_ζ^{(n)}(S; c) := V_ζ^{(n)}(S; c, n). This is the value of the game that we are interested in analyzing.
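For very small horizons, recursion (5) can be evaluated by brute force. The sketch below (ours; the grids are coarse heuristics, and the Δ range [0, S] assumes a nondecreasing payoff with L ≤ 1) illustrates the structure of the definition; it is exponential in m and meant only as an illustration.

import math

def game_value(g, S, c, m, zeta, grid=15):
    if m == 0 or c <= 0:
        return g(S)
    r_max = min(zeta, math.sqrt(c))
    rs = [r_max * (2.0 * k / (grid - 1) - 1.0) for k in range(grid)]
    deltas = [S * k / (grid - 1) for k in range(grid)]
    return min(max(-d * r +
                   game_value(g, S * (1 + r), c - r * r, m - 1, zeta, grid)
                   for r in rs)
               for d in deltas)

g_call = lambda S: max(0.0, S - 100.0)
print(game_value(g_call, 100.0, 0.01, 2, 0.1))   # a two-round toy instance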
Towards establishing an upper bound on the value (5), we shall discuss the question of how to
choose the hedge parameter Δ on each round. We can refer to a "hedging strategy" in this game as a function of the tuple (S, c, m, n, ζ, g(·)) that returns a hedge position. In our upper bound, in fact we need only consider hedging strategies Δ(S, c) that depend on S and c; there certainly will be a dependence on g(·) as well but we leave this implicit.
4 Asymptotic Results
The central focus of the present paper is the following question: "For fixed c and S, what is the asymptotic behavior of the value V_ζ^{(n)}(S; c)?" and "Is there a natural hedging strategy Δ(S, c) that (roughly) achieves this value?" In other words, what is the minimax value of the option, as well as the optimal hedge, when we fix the variance budget c and the asset's current price S, but let the number of rounds tend to ∞? We now give answers to these questions, and devote the remainder of the paper to developing the results in detail.
We consider payoff functions g : ℝ≥0 → ℝ≥0 satisfying three constraints:
² The constraint in [1] was E[r_i² | r_1, . . . , r_{i−1}] ≤ exp(c/n) − 1, but this is roughly equivalent.
³ We note that Abernethy et al. [1] also assumed that the multiplicative price jumps |r_i| are bounded by ζ_n = Θ(√((log n)/n)); this is a stronger assumption than what we impose on (ζ_n) in Theorem 1.
1. g is convex.
2. g is L-Lipschitz, i.e. |g(x) − g(y)| ≤ L|x − y|.
3. g is eventually linear, i.e. there exists K > 0 such that g(x) is a linear function for all x ≥ K; in this case we also say g is K-linear.
We believe the first two conditions are strictly necessary to achieve the desired results. The K-linearity may not be necessary but makes our analysis possible. We note that the constraints above
encompass the standard European call and put options.
Henceforth we shall let G be a zero-drift GBM with unit volatility. In particular, we have that log G(t) ∼ N(−½t, t). For S, c ≥ 0, define the function

U(S, c) = E_G[g(S · G(c))],

and observe that U(S, 0) = g(S). Our goal will be to show that U is asymptotically the minimax price of the option. Most importantly, this function U(S, c) is identical to V(S, T − c/σ²), the Black-Scholes value of the option in (4) when the GBM volatility parameter is σ in the Black-Scholes analysis. In particular, analogous to (3), U(S, c) satisfies a differential equation:

½ S² ∂²U/∂S² − ∂U/∂c = 0.    (6)
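Equation (6) is easy to check numerically. For the European call, U(S, c) has the closed form S Φ(d₁) − K Φ(d₁ − √c) with d₁ = (log(S/K) + c/2)/√c; the finite-difference test below is our own sketch.

import math
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def U_call(S, c, K=100.0):
    # U(S, c) = E[g_EC(S * G(c))] for the European call payoff.
    if c <= 0:
        return max(0.0, S - K)
    d1 = (math.log(S / K) + 0.5 * c) / math.sqrt(c)
    return S * Phi(d1) - K * Phi(d1 - math.sqrt(c))

S, c, h = 105.0, 0.3, 1e-3
U_SS = (U_call(S + h, c) - 2 * U_call(S, c) + U_call(S - h, c)) / h**2
U_c = (U_call(S, c + h) - U_call(S, c - h)) / (2 * h)
print(0.5 * S * S * U_SS, U_c)   # the two sides of (6) agree numerically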
The following is the main result of this paper.
Theorem 1. Let S > 0 be the initial asset price and let c > 0 be the variance budget. Assume we have a sequence {ζ_n} with lim_{n→∞} ζ_n = 0 and lim inf_{n→∞} n ζ_n² > c. Then

lim_{n→∞} V_{ζ_n}^{(n)}(S; c) = U(S, c).
This statement tells us that the minimax price of an option, when Nature has a total fluctuation
budget of c, approaches the Black-Scholes price of the option when the time to expiration is c.
This is particularly surprising since our minimax pricing framework made no assumptions as to
the stochastic process generating the price path. This is the same conclusion as in [1], but we
obtained our result with a significantly weaker assumption. Unlike [1], however, we do not show
that the adversary's minimax optimal stochastic price path necessarily converges to a GBM. The convergence of Nature's price path to GBM in [1] was made possible by the uniform per-round
variance constraint.
The previous theorem is the result of two main technical contributions. First, we prove a lower bound on the limiting value of V_{ζ_n}^{(n)}(S; c) by exhibiting a simple randomized strategy for Nature in the form of a stochastic price path, and appealing to the Lindeberg-Feller central limit theorem. Second, we prove an O(c ζ^{1/4}) upper bound on the deviation between V_ζ^{(n)}(S; c) and U(S, c). The upper bound is obtained by providing an explicit strategy for the Investor:

Δ(S, c) = S ∂U(S, c)/∂S,

and carefully bounding the difference between the output using this strategy and the game value. In the course of doing so, because we are invoking Taylor's remainder theorem, we need to bound the first few derivatives of U(S, c). Bounding these derivatives turns out to be the crux of the analysis; in particular, it uses the full force of the assumptions on g, including that g is eventually linear, to avoid the pathological cases when the derivative of g converges to its limiting value very slowly.
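Combining the pieces, the strategy Δ(S, c) = S U_S(S, c) can be run inside the protocol of Section 3. The self-contained sketch below (ours; derivative by finite differences, call payoff with strike 100) plays it against a hand-picked path that includes one sizable jump:

import math
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def U_call(S, c, K=100.0):
    if c <= 0:
        return max(0.0, S - K)
    d1 = (math.log(S / K) + 0.5 * c) / math.sqrt(c)
    return S * Phi(d1) - K * Phi(d1 - math.sqrt(c))

def bs_hedge(S, c, K=100.0, h=1e-4):
    # Delta(S, c) = S * U_S(S, c), via a central finite difference.
    return S * (U_call(S + h, c, K) - U_call(S - h, c, K)) / (2 * h)

moves, S, profit = [0.01, -0.02, 0.06, -0.01, 0.02], 100.0, 0.0
c_rem = sum(r * r for r in moves)
for r in moves:
    profit += bs_hedge(S, c_rem) * r
    S *= 1.0 + r
    c_rem -= r * r
print(max(0.0, S - 100.0) - profit)   # the regret; small next to U(100, c)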
5 Lower Bound
In this section we prove that U(S, c) is a lower bound to the game value V_{ζ_n}^{(n)}(S; c). We note that the result in this section does not use the assumptions in Theorem 1 that ζ_n → 0, nor that g is convex and eventually linear. In particular, the following result also applies when the sequence (ζ_n) is a constant ζ > 0.
Theorem 2. Let g : ℝ≥0 → ℝ≥0 be an L-Lipschitz function, and let {ζ_n} be a sequence of positive numbers with lim inf_{n→∞} n ζ_n² > c. Then for every S, c > 0,

lim inf_{n→∞} V_{ζ_n}^{(n)}(S; c) ≥ U(S, c).
The proof of Theorem 2 is based on correctly "guessing" a randomized strategy for Nature. For each n ∈ ℕ, define the random variables R_{1,n}, . . . , R_{n,n} ∼ Uniform{±√(c/n)} i.i.d. Note that (R_{i,n})_{i=1}^n satisfies TotVarConstraint by construction. Moreover, the assumption lim inf_{n→∞} n ζ_n² > c implies ζ_n > √(c/n) for all sufficiently large n, so eventually (R_{i,n}) also satisfies JumpConstraint. We have the following convergence result for (R_{i,n}), which we prove in Appendix A.
Lemma 3. Under the same setting as in Theorem 2, we have the convergence in distribution

∏_{i=1}^n (1 + R_{i,n}) →_d G(c)  as n → ∞.

Moreover, we also have the convergence in expectation

lim_{n→∞} E[ g( S · ∏_{i=1}^n (1 + R_{i,n}) ) ] = U(S, c).    (7)
With the help of Lemma 3, we are now ready to prove Theorem 2.

Proof of Theorem 2. Let n be sufficiently large such that n ζ_n² > c. Let R_{i,n} ~ Uniform{±√(c/n)}
i.i.d., for 1 ≤ i ≤ n. As noted above, (R_{i,n}) satisfies both TotVarConstraint and JumpConstraint.
Then we have

    V_{ζ_n}^{(n)}(S; c) = inf_{Δ_1} sup_{r_1} ··· inf_{Δ_n} sup_{r_n} [ g( S ∏_{i=1}^n (1 + r_i) ) − Σ_{i=1}^n Δ_i r_i ]
                        ≥ inf_{Δ_1} ··· inf_{Δ_n} E[ g( S ∏_{i=1}^n (1 + R_{i,n}) ) − Σ_{i=1}^n Δ_i R_{i,n} ]
                        = E[ g( S ∏_{i=1}^n (1 + R_{i,n}) ) ].

The first line follows from unrolling the recursion in the definition (5); the second line from replacing
the supremum over (r_i) with expectation over (R_{i,n}); and the third line from E[R_{i,n}] = 0. Taking
the limit on both sides and using (7) from Lemma 3 gives us the desired conclusion.
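The convergence in Lemma 3 is straightforward to check by simulation. The sketch below is purely
illustrative and not part of the original analysis: by Lemma 3 and the central limit theorem, G(c) is
the exponential of a N(−c/2, c) random variable, and the payoff g, the parameter values, and the
sample sizes used here are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
S, c, strike = 1.0, 0.04, 1.0
g = lambda x: np.maximum(x - strike, 0.0)  # a 1-Lipschitz call-style payoff (our choice)

def nature_value(n, trials=1_000_000):
    # Nature plays R_{i,n} ~ Uniform{+sqrt(c/n), -sqrt(c/n)} i.i.d.; with
    # H ~ Binomial(n, 1/2) "up" moves, prod(1 + R_{i,n}) = (1+d)^H (1-d)^(n-H).
    d = np.sqrt(c / n)
    H = rng.binomial(n, 0.5, size=trials)
    return g(S * (1.0 + d) ** H * (1.0 - d) ** (n - H)).mean()

def U_estimate(trials=1_000_000):
    # U(S, c) = E[g(S * G(c))] with G(c) = exp(B_c - c/2), B_c ~ N(0, c).
    G = np.exp(np.sqrt(c) * rng.standard_normal(trials) - c / 2)
    return g(S * G).mean()

for n in (10, 100, 1000):
    print(n, nature_value(n))  # approaches U(S, c) as n grows
print("U(S, c) ~", U_estimate())
```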
6 Upper Bound

In this section we prove that U(S, c) is an upper bound to the limit of V_ζ^{(n)}(S; c).

Theorem 4. Let g : ℝ_{≥0} → ℝ_{≥0} be a convex, L-Lipschitz, K-linear function. Let 0 < ζ ≤ 1/16. Then
for any S, c > 0 and n ∈ ℕ, we have

    V_ζ^{(n)}(S; c) ≤ U(S, c) + (18c + 8/√(2π)) LK ζ^{1/4}.

We remark that the right-hand side of the above bound does not depend on the number of trading
periods n. The key parameter is ζ, which determines the size of the largest price jump of the stock.
However, we expect that as the trading frequency increases, the size of the largest price jump will
shrink. Plugging a sequence {ζ_n} in place of ζ in Theorem 4 gives us the following corollary.
Corollary 1. Let g : ℝ_{≥0} → ℝ_{≥0} be a convex, L-Lipschitz, K-linear function. Let {ζ_n} be a sequence
of positive numbers with ζ_n → 0. Then for S, c > 0,

    lim sup_{n→∞} V_{ζ_n}^{(n)}(S; c) ≤ U(S, c).

Note that the above upper bound relies on the convexity of g, for if g were concave, then we would
have the reverse conclusion:

    V_ζ^{(n)}(S; c) ≥ g(S) = g(S · E[G(c)]) ≥ E[g(S · G(c))] = U(S, c).

Here the first inequality follows from setting all r = 0 in (5), and the second is by Jensen's inequality.

6.1 Intuition
For brevity, we write the partial derivatives U_c(S, c) = ∂U(S, c)/∂c, U_S(S, c) = ∂U(S, c)/∂S, and
U_{S²}(S, c) = ∂²U(S, c)/∂S². The proof of Theorem 4 proceeds by providing a "guess" for the Investor's action, which is a modification of the original Black-Scholes hedging strategy. Specifically,
when the current price is S and the remaining budget is c, then the Investor invests

    Δ(S, c) := S · U_S(S, c).
We now illustrate how this strategy gives rise to a bound on V_ζ^{(n)}(S; c) as stated in Theorem 4. First
suppose for some m ≥ 1 we know that V_ζ^{(n)}(S; c, m−1) is a rough approximation to U(S, c). Note
that a Taylor approximation of the function r_m ↦ U(S + S r_m, c − r_m²) around U(S, c) gives us

    U(S + S r_m, c − r_m²) = U(S, c) + r_m S U_S(S, c) − r_m² U_c(S, c) + ½ r_m² S² U_{S²}(S, c) + O(r_m³)
                           = U(S, c) + r_m S U_S(S, c) + O(r_m³),

where the last line follows from the Black-Scholes equation (6). Now by setting Δ = S U_S(S, c) in
the definition (5), and using the assumption and the Taylor approximation above, we obtain

    V_ζ^{(n)}(S; c, m) = inf_{Δ∈ℝ} sup_{|r_m| ≤ min{ζ, √c}} [ −Δ r_m + V_ζ^{(n)}(S + S r_m; c − r_m², m−1) ]
                       ≤ sup_{r_m} [ −r_m S U_S(S, c) + V_ζ^{(n)}(S + S r_m; c − r_m², m−1) ]
                       = sup_{r_m} [ −r_m S U_S(S, c) + U(S + S r_m, c − r_m²) + (approx terms) ]
                       = U(S, c) + O(r_m³) + (approx terms).

In other words, on each round of the game we add an O(r_m³) term to the approximation error. By the
time we reach V_ζ^{(n)}(S; c, n) we will have an error term that is roughly on the order of Σ_{m=1}^n |r_m|³.
Since Σ_{m=1}^n r_m² ≤ c and |r_m| ≤ ζ by assumption, we get Σ_{m=1}^n |r_m|³ ≤ ζc.

The details are more intricate because the error O(r_m³) from the Taylor approximation also depends
on S and c. Trading off the dependencies of c and ζ leads us to the bound stated in Theorem 4.
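Equation (6) itself is not reproduced in this excerpt, but matching the two lines of the Taylor
expansion above forces it to read U_c(S, c) = ½ S² U_{S²}(S, c). The sketch below checks this identity
by finite differences, again taking G(c) to be the exponential of a N(−c/2, c) variable; the smooth
Lipschitz payoff, the quadrature order, and the step size h are our own choices.

```python
import numpy as np

def U(S, c, deg=400):
    # U(S, c) = E[g(S * G(c))] with G(c) = exp(sqrt(c) Z - c/2), Z ~ N(0, 1),
    # computed by Gauss-Hermite quadrature for the standard normal weight.
    g = lambda x: np.sqrt(1.0 + x * x)  # smooth, 1-Lipschitz payoff (our choice)
    z, w = np.polynomial.hermite_e.hermegauss(deg)
    return (w * g(S * np.exp(np.sqrt(c) * z - c / 2))).sum() / np.sqrt(2 * np.pi)

S, c, h = 1.0, 0.25, 1e-3
U_c = (U(S, c + h) - U(S, c - h)) / (2 * h)                  # dU/dc
U_SS = (U(S + h, c) - 2 * U(S, c) + U(S - h, c)) / h ** 2    # d^2U/dS^2
print(U_c, 0.5 * S ** 2 * U_SS)  # the two sides agree up to O(h^2)
```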
6.2 Proof (Sketch) of Theorem 4

In this section we describe an outline of the proof of Theorem 4. Throughout, we assume g is a
convex, L-Lipschitz, K-linear function, and 0 < ζ ≤ 1/16. The proofs of Lemma 5 and Lemma 7
are provided in Appendix B, and Lemma 6 is proved in Appendix C.

For S, c > 0 and |r| ≤ √c, we define the (single-round) error term of the Taylor approximation,

    ε_r(S, c) := U(S + Sr, c − r²) − U(S, c) − r S U_S(S, c).    (8)
We also define a sequence {Φ^{(n)}(S, c, m)}_{m=0}^n to keep track of the cumulative errors. We define
this sequence by setting Φ^{(n)}(S, c, 0) = 0, and for 1 ≤ m ≤ n,

    Φ^{(n)}(S, c, m) := sup_{|r| ≤ min{ζ, √c}} [ ε_r(S, c) + Φ^{(n)}(S + Sr, c − r², m−1) ].    (9)

For simplicity, we write Φ^{(n)}(S, c) ≡ Φ^{(n)}(S, c, n). Then we have the following result, which
formalizes the notion from the preceding section that V_ζ^{(n)}(S; c, m) is an approximation to U(S, c).

Lemma 5. For S, c > 0, n ∈ ℕ, and 0 ≤ m ≤ n, we have

    V_ζ^{(n)}(S; c, m) ≤ U(S, c) + Φ^{(n)}(S, c, m).    (10)
It now remains to bound Φ^{(n)}(S, c) from above. A key step in doing so is to show the following
bounds on ε_r. This is where the assumptions that g be L-Lipschitz and K-linear are important.

Lemma 6. For S, c > 0, and |r| ≤ min{1/16, √c/8}, we have

    ε_r(S, c) ≤ 16LK ( max{c^{−3/2}, c^{−1/2}} |r|³ + max{c^{−2}, c^{−1/2}} r⁴ ).    (11)

Moreover, for S > 0, 0 < c ≤ 1/4, and |r| ≤ √c, we also have

    ε_r(S, c) ≤ (4LK/√(2π)) · r²/√c.    (12)
Using Lemma 6, we have the following bound on Φ^{(n)}(S, c).

Lemma 7. For S, c > 0, n ∈ ℕ, and 0 < ζ ≤ 1/16, we have

    Φ^{(n)}(S, c) ≤ (18c + 8/√(2π)) LK ζ^{1/4}.
Proof (sketch). By unrolling the inductive definition (9), we can write Φ^{(n)}(S, c) as the supremum
of f(r_1, …, r_n), where

    f(r_1, …, r_n) = Σ_{m=1}^n ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ).

Let (r_1, …, r_n) be such that |r_m| ≤ ζ and Σ_{m=1}^n r_m² ≤ c. We will show that f(r_1, …, r_n) ≤
(18c + 8/√(2π)) LK ζ^{1/4}. Let 0 ≤ n* ≤ n be the largest index such that Σ_{m=1}^{n*} r_m² ≤ c − √ζ. We
split the analysis into two parts.

1. For 1 ≤ m ≤ min{n, n* + 1}: By (11) from Lemma 6 and a little calculation, we have

    ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ 18 LK ζ^{1/4} r_m².

Summing over 1 ≤ m ≤ min{n, n* + 1} then gives us

    Σ_{m=1}^{min{n, n*+1}} ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ 18 LK ζ^{1/4} Σ_m r_m² ≤ 18 LK ζ^{1/4} c.

2. For n* + 2 ≤ m ≤ n (if n* ≤ n − 2): By (12) from Lemma 6, we have

    ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ (4LK/√(2π)) · r_m² / √( Σ_{i=m}^n r_i² ).

Therefore,

    Σ_{m=n*+2}^n ε_{r_m}( S ∏_{i=1}^{m−1} (1 + r_i),  c − Σ_{i=1}^{m−1} r_i² ) ≤ (4LK/√(2π)) Σ_{m=n*+2}^n r_m² / √( Σ_{i=m}^n r_i² ) ≤ (8LK/√(2π)) ζ^{1/4},

where the last inequality follows from Lemma 8 in Appendix B.

Combining the two cases above gives us the desired conclusion.
Proof of Theorem 4. Theorem 4 follows immediately from Lemma 5 and Lemma 7.
Acknowledgments. We gratefully acknowledge the support of the NSF through grant CCF-1115788 and of the ARC through Australian Laureate Fellowship FL110100281.
References
[1] J. Abernethy, R. M. Frongillo, and A. Wibisono. Minimax option pricing meets Black-Scholes
in the limit. In Howard J. Karloff and Toniann Pitassi, editors, STOC, pages 1029–1040. ACM,
2012.
[2] F. Black and M. Scholes. The pricing of options and corporate liabilities. The Journal of Political
Economy, pages 637–654, 1973.
[3] P. DeMarzo, I. Kremer, and Y. Mansour. Online trading algorithms and robust option pricing.
In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 477–486.
ACM, 2006.
[4] R. Durrett. Probability: Theory and Examples (Fourth Edition). Cambridge University Press,
2010.
[5] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37. Springer, 1995.
[6] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[7] G. Shafer and V. Vovk. Probability and Finance: It's Only a Game!, volume 373. Wiley-Interscience, 2001.
[8] J. M. Steele. Stochastic Calculus and Financial Applications, volume 45. Springer Verlag,
2001.
4,324 | 4,913 | Small-Variance Asymptotics for Hidden Markov Models
Anirban Roychowdhury, Ke Jiang, Brian Kulis
Department of Computer Science and Engineering
The Ohio State University
[email protected], {jiangk,kulis}@cse.ohio-state.edu
Abstract
Small-variance asymptotics provide an emerging technique for obtaining scalable
combinatorial algorithms from rich probabilistic models. We present a small-variance asymptotic analysis of the Hidden Markov Model and its infinite-state
Bayesian nonparametric extension. Starting with the standard HMM, we first derive a "hard" inference algorithm analogous to k-means that arises when particular variances in the model tend to zero. This analysis is then extended to the
Bayesian nonparametric case, yielding a simple, scalable, and flexible algorithm
for discrete-state sequence data with a non-fixed number of states. We also derive the corresponding combinatorial objective functions arising from our analysis, which involve a k-means-like term along with penalties based on state transitions and the number of states. A key property of such algorithms is that, particularly in the nonparametric setting, standard probabilistic inference algorithms lack scalability and are heavily dependent on good initialization. A number of results on synthetic and real data sets demonstrate the advantages of the
proposed framework.
1 Introduction
Inference in large-scale probabilistic models remains a challenge, particularly for modern "big data"
problems. While graphical models are undisputedly important as a way to build rich probability distributions, existing sampling-based and variational inference techniques still leave some applications
out of reach.
A recent thread of research has considered small-variance asymptotics of latent-variable models
as a way to capture the benefits of rich graphical models while also providing a framework for
designing more scalable combinatorial optimization algorithms. Such models are often motivated
by the well-known connection between mixtures of Gaussians and k-means: as the variances of
the Gaussians tend to zero, the mixture of Gaussians model approaches k-means, both in terms of
objectives and algorithms. This approach has recently been extended beyond the standard Gaussian
mixture in various ways: to Dirichlet process mixtures and hierarchical Dirichlet processes [8], to
non-Gaussian observations in the nonparametric setting [7], and to feature learning via the Beta
process [5]. The small-variance analysis for each of these models yields simple algorithms that
feature many of the benefits of the probabilistic models but with increased scalability. In essence,
small-variance asymptotics provides a link connecting some probabilistic graphical models with
non-probabilistic combinatorial counterparts.
Thus far, small-variance asymptotics has been applied only to fairly simple latent-variable models.
In particular, to our knowledge there has been no such analysis for sequential data models such
as the Hidden Markov Model (HMM) nor its nonparametric counterpart, the infinite-state HMM
(iHMM). HMMs are one of the most widely used probabilistic models for discrete sequence data,
with diverse applications including DNA sequence analysis, natural language processing and speech
recognition [4]. The HMMs consist of a discrete hidden state sequence that evolves according
to Markov assumptions, along with independent observations at each time step depending on the
hidden state. The learning problem is to estimate the model given only the observation data.
To develop scalable algorithms for sequential data, we begin by applying small-variance analysis to
the standard parametric HMM. In the small-variance limit, we obtain a penalized k-means problem
where the penalties capture the state transitions. Further, a special case of the resulting model yields
segmental k-means [9]. For the nonparametric model we obtain an objective that effectively combines the asymptotics from the parametric HMM with the asymptotics for the Hierarchical Dirichlet
Process [8]. We obtain a k-means-like objective with three penalties: one for state transitions, one
for the number of reachable states out of each state, and one for the number of total states. The
key aspect of our resulting formulation is that, unlike the standard sampler for the infinite-state
HMM, dynamic programming can be used. In particular, we describe a simple algorithm that monotonically decreases the underlying objective function. Finally, we present results comparing our
non-probabilistic algorithms to their probabilistic counterparts, on a number of real and synthetic
data sets.
Related Work. In the parametric setting (i.e., the standard HMM), several algorithms exist for maximum likelihood (ML) estimation, such as the Baum-Welch algorithm (a special instance of the EM
algorithm) and the segmental k-means algorithm [9]. Infinite-state HMMs [3, 12] are nonparametric
Bayesian extensions of the finite HMMs where hierarchical Dirichlet process (HDP) priors are used
to allow for an unspecified number of states. Exact inference in this model is intractable, so one
typically resorts to sampling methods. The standard Gibbs sampling methods [12] are notoriously
slow to converge and cannot exploit the forward-backward structure of the HMMs. [6] presents a
Beam sampling method which bypasses this obstacle via slice sampling, where only a finite number
of hidden states are considered in each iteration. However, this approach is still computationally intensive since it works in the non-collapsed space. Thus infinite-state HMMs, while desirable from a
modeling perspective, have been limited by their inability to scale to large data sets; this is precisely
the situation in which small-variance asymptotics has the potential to be beneficial.
Connections between the mixture of Gaussians model and k-means are widely known. Beyond
the references discussed earlier, we note that a similar connection relating probabilistic PCA and
standard PCA was discussed in [13, 10], as well as connections between support vector machines
and a restricted Bayes optimal classifier in [14].
2 Asymptotics of the finite-state HMM
We begin, as a warm-up, with the simpler parametric (finite-state) HMM model, and show that
small-variance asymptotics on the joint log likelihood yields a penalized k-means objective, and on
the EM algorithm yields a generalized segmental k-means algorithm. The tools developed in this
section will then be used for the more involved nonparametric model.
2.1 The Model
The Hidden Markov Model assumes a hidden state sequence Z = {z_1, …, z_N} drawn from a finite
discrete state space {1, …, K}, coupled with the observation sequence X = {x_1, …, x_N}. The
resulting generative model defines a probability distribution over the hidden state sequence Z and
the observation sequence X. Let T ∈ ℝ^{K×K} be the stationary transition probability matrix of the
hidden state sequence, with T_{i·} = π_i ∈ ℝ^K being a distribution over the latent states. For clarity
of presentation, we will use a binary 1-of-K representation for the latent state assignments. That
is, we will write the event of the latent state at time step t being k as z_{tk} = 1 and z_{tl} = 0 for all
l = 1, …, K, l ≠ k. Then the transition probabilities can be written as T_{ij} = Pr(z_{tj} = 1 | z_{t−1,i} = 1).
The initial state distribution is π_0 ∈ ℝ^K. The Markov structure dictates that z_t ~ Mult(π_{z_{t−1}}), and
the observations follow x_t ~ F(θ_{z_t}). The observation density F is assumed invariant, and the Markov
structure induces conditional independence of the observations given the latent states.

In the following, we present the asymptotic treatment for the finite HMM with Gaussian emission
densities Pr(x_t | z_{tk} = 1) = N(x_t | μ_k, σ²I_d). Here θ_{z_t} = μ_k, since the parameter space
Θ contains only the emission means. Generalization to exponential family emission densities is
straightforward [7]. At a high level, the connection we seek to establish can be proven in two ways.
The first approach is to examine small-variance asymptotics directly on the joint probability distribution of the HMM, as done in [5] for clustering and feature learning problems. We will primarily
focus on this approach, since our ideas can be more clearly expressed by this technique, and it is
independent of any inference algorithm. The other approach is to analyze the behavior of the EM
algorithm as the variance goes to zero. We will briefly discuss this approach as well, but for further
details the interested reader can consult the supplementary material.
2.1.1 Exponential Family Transitions
Our main analysis relies on appropriate manipulation of the transition probabilities, where we will
use the bijection between exponential families and Bregman divergences established in [2]. Since
the conditional distribution of the latent state at any time step is multinomial in the transition probabilities from the previous state, we use the aforementioned bijection to refactor the transition probabilities in the joint distribution of the HMM into a form that utilizes Bregman divergences. This,
with an appropriate scaling to enable small-variance asymptotics as mentioned in [7], allows us to
combine the emission and transition distributions into a simple objective function.

We denote T_{jk} = Pr(z_{tk} = 1 | z_{t−1,j} = 1) as before, and the multinomial distribution for the latent
state at time step t as

    Pr(z_t | z_{t−1,j} = 1) = ∏_{k=1}^K T_{jk}^{z_{tk}}.    (1)
In order to apply small-variance asymptotics, we must allow the variance in the transition probabilities to go to zero in a reasonable way. Following the treatment in [2], we can rewrite this distribution
in a suitable exponential family notation, which we then express in the following equivalent form:

    Pr(z_t | z_{t−1,j} = 1) = exp(−d_φ(z_t, m_j)) b_φ(z_t),    (2)

where the Bregman divergence d_φ(z_t, m_j) = Σ_{k=1}^K z_{tk} log(z_{tk}/T_{jk}) = KL(z_t, m_j), m_j =
{T_{jk}}_{k=1}^K, and b_φ(z_t) = 1. See the supplementary notes for derivation details. The prime motivation for using this form is that we can appropriately scale the variance of the exponential family
distribution following Lemma 3.1 of [7]. In particular, if we introduce a new parameter β̃, and
generalize the transition probabilities to be

    Pr(z_t | z_{t−1,j} = 1) = exp(−β̃ d_φ(z_t, m_j)) b_φ̃(z_t),

where φ̃ = β̃φ, then the mean of the distribution is the same in the scaled distribution (namely, m_j)
while the variance is scaled. As β̃ → ∞, the variance goes to zero.

The next step is to link the emission and transition probabilities so that the variance is scaled appropriately in both. In particular, we will define β = 1/(2σ²) and then let β̃ = λβ for some λ. The
Gaussian emission densities can now be written as Pr(x_t | z_{tk} = 1) = exp(−β‖x_t − μ_k‖²₂) f(β) and
the transition probabilities as Pr(z_t | z_{t−1,j} = 1) = exp(−λβ d_φ(z_t, m_j)) b_φ̃(z_t). See [7] for further
details about the scaling operation.
2.1.2 Joint Probability Asymptotics
We now have all the background development required to perform small-variance asymptotics on
the HMM joint probability, and derive the segmental k-means algorithm. Our parameters of interest
are the Z = [z_1, …, z_N] vectors, the μ = [μ_1, …, μ_K] means, and the transition parameter matrix
T. We can write down the joint likelihood by taking a product of all the probabilities in the model:

    p(X, Z) = p(z_1) ∏_{t=2}^N p(z_t | z_{t−1}) ∏_{t=1}^N N(x_t | μ_{z_t}, σ²I_d).

With some abuse of notation, let m_{z_{t−1}} denote the mean transition vector given by the assignment
z_{t−1} (that is, if z_{t−1,j} = 1 then m_{z_{t−1}} = m_j). The exp-family probabilities above allow us to
rewrite this joint likelihood as

    p(X, Z) ∝ exp[ −β( Σ_{t=1}^N ‖x_t − μ_{z_t}‖²₂ + λ Σ_{t=2}^N KL(z_t, m_{z_{t−1}}) ) + log p(z_1) ].    (3)
To obtain the corresponding non-probabilistic objective from small-variance asymptotics, we consider the MAP estimate obtained by maximizing the joint likelihood with respect to the parameters
asymptotically as σ² goes to zero (β goes to ∞). In our case, it is particularly simple given the joint
likelihood above. The log-likelihood easily yields the following asymptotically:

    max_{Z,μ,T}  −( Σ_{t=1}^N ‖x_t − μ_{z_t}‖²₂ + λ Σ_{t=2}^N KL(z_t, m_{z_{t−1}}) )    (4)

or equivalently,

    min_{Z,μ,T}  Σ_{t=1}^N ‖x_t − μ_{z_t}‖²₂ + λ Σ_{t=2}^N KL(z_t, m_{z_{t−1}}).    (5)
Note that, as mentioned above, m_j = {T_{jk}}_{k=1}^K. We can view the above objective function as a
"penalized" k-means problem, where the penalties are given by the transitions from state to state.
One possible strategy to minimize (5) would be to iteratively minimize with respect to each of the
individual parameters (Z, μ, T) keeping the other two fixed. When fixing μ and T, and taking λ = 1,
the solution for Z in (4) is identical to the MAP update on the latent variables Z for this model, as
in a standard HMM. When λ ≠ 1, a simple generalization of the standard forward-backward routine
can be used to find the optimal assignments. Keeping Z and T fixed, the update on μ_k is easily seen
to be the equiweighted average of the data points which have been assigned to latent state k in the
MAP estimate (it is the same minimization as in k-means for updating cluster means). Finally, since
KL(z_t, m_j) = −Σ_{k=1}^K z_{tk} log T_{jk}, minimization with respect to T simply yields the empirical
transition probabilities, that is,

    T_{jk,new} = (# of transitions from state j to k) / (# of transitions from state j),

with both counts taken from the MAP path computed during maximization w.r.t. Z. We observe that,
when λ = 1, the iterative minimization algorithm to solve (5) is exactly the segmental k-means
algorithm, also known as Viterbi re-estimation.
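For concreteness, the iterative scheme just described can be sketched in a few lines. The code
below is a minimal rendering of coordinate descent on objective (5) for the Gaussian case, using
KL(z_t, m_{z_{t−1}}) = −log T_{z_{t−1}, z_t} for one-hot z_t; the initialization, the smoothing constant
eps, and the handling of empty states are our own choices and are not prescribed by the text.

```python
import numpy as np

def asymp_hmm(X, K, lam=1.0, iters=20, eps=1e-10):
    """Generalized segmental k-means for objective (5): squared-error emission
    costs plus lam * (-log T) transition penalties.  X is an N x d array."""
    N = len(X)
    mu = X[np.random.choice(N, K, replace=False)]  # init means (our choice)
    T = np.full((K, K), 1.0 / K)                   # init transitions
    for _ in range(iters):
        # Viterbi step: D[t, k] = min cost of a path ending in state k at time t.
        emit = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # N x K
        D = np.zeros((N, K)); back = np.zeros((N, K), dtype=int)
        D[0] = emit[0]
        for t in range(1, N):
            trans = D[t - 1][:, None] - lam * np.log(T + eps)    # K x K (j -> k)
            back[t] = trans.argmin(0)
            D[t] = emit[t] + trans.min(0)
        z = np.zeros(N, dtype=int); z[-1] = D[-1].argmin()
        for t in range(N - 1, 0, -1):
            z[t - 1] = back[t, z[t]]
        # Mean update: equiweighted average of points assigned to each state.
        for k in range(K):
            if (z == k).any():
                mu[k] = X[z == k].mean(0)
        # Transition update: empirical frequencies along the MAP path.
        C = np.zeros((K, K))
        for t in range(1, N):
            C[z[t - 1], z[t]] += 1
        T = C / np.maximum(C.sum(1, keepdims=True), 1.0)
    return z, mu, T
```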
2.1.3 EM algorithm asymptotics
We can reach the same algorithm alternatively by writing down the steps of the EM algorithm and
exploring the small-variance limit of these steps, analogous to the approach of [8] for a Dirichlet
process mixture. Given space limitations (and the fact that the resulting algorithm is identical, as
expected), a more detailed discussion can be found in the supplementary material.
3 Asymptotics of the Infinite Hidden Markov Model
We now tackle the more complex nonparametric model. We will derive the objective function directly as in the parametric case but, unlike the parametric version, we will not apply asymptotics to
the existing sampler algorithms. Instead, we will present a new algorithm to optimize our derived
objective function. By deriving an algorithm directly, we ensure that our method takes advantage of
dynamic programming, unlike the standard sampler.
3.1 The Model
The iHMM, also known as the HDP-HMM [3, 12], is a nonparametric Bayesian extension of the
HMM, where an HDP prior is used to allow for an unspecified number of states. The HDP is a
set of Dirichlet Processes (DPs) with a shared base distribution, that are themselves drawn from a
Dirichlet process [12]. Formally, we can write G_k ~ DP(α, G_0) with a shared base distribution
G_0 ~ DP(γ, H), where H is the global base distribution that permits sharing of probability mass
across the G_k. α and γ are the concentration parameters for the G_k and G_0 measures, respectively.

To apply HDPs to sequential data, the iHMM can be formulated as follows:

    ω ~ GEM(γ),    π_k | ω ~ DP(α, ω),    θ_k ~ H,
    z_t | z_{t−1} ~ Mult(π_{z_{t−1}}),    x_t ~ F(θ_{z_t}).

For a full Bayesian treatment, Gamma priors are placed on the concentration parameters (though we
will not employ such priors in our asymptotic analysis).
Following the discussion in the parametric case, our goal is to write down the full joint likelihood
in the above model. As discussed in [12], the Hierarchical Dirichlet Process yields assignments that
follow the Chinese Restaurant Franchise (CRF), and thus the iHMM model additionally incorporates
a term in the joint likelihood involving the prior probability of a set of state assignments arising
from the CRF. Suppose an assignment of observations to states has K different states (i.e., number
of restaurants in the franchise), s_i is the number of states that can be reached from state i in one step
(i.e., number of tables in each restaurant i), and n_i is the number of observations in each state i (i.e.,
number of customers in each restaurant). Then the probability of an assignment in the HDP can be
written as (after integrating out mixture weights [1, 11], and if we only consider terms that would
not be constants after we do the asymptotic analysis [5]):

    p(Z | α, γ) ∝ γ^{K−1} · Γ(γ+1)/Γ(γ + Σ_{k=1}^K s_k) · ∏_{k=1}^K α^{s_k−1} Γ(α+1)/Γ(α + n_k).
For the likelihood, we follow the same assumption as in the parametric case: the observation densities are Gaussians with a shared covariance matrix σ²I_d. Further, the means are drawn independently from the prior N(0, ρ²I_d), where ρ² > 0 (this is needed, as the model is fully Bayesian now).
Therefore, p(μ_{1:K}) = ∏_{k=1}^K N(μ_k | 0, ρ²I_d), and

    p(X, Z) ∝ p(Z | α, γ) · p(z_1) ∏_{t=2}^N p(z_t | z_{t−1}) · ∏_{t=1}^N N(x_t | μ_{z_t}, σ²I_d) · p(μ_{1:K}).
Now, we can perform the small-variance analysis on the generative model. In order to retain the
impact of the hyperparameters α and γ in the asymptotics, we can choose some constants λ_1, λ_2 > 0
such that

    α = exp(−λ_1 β),    γ = exp(−λ_2 β),

where β = 1/(2σ²) as before. Note that, in this way, we have α → 0 and γ → 0 when β → ∞.
We now can consider the objective function for maximizing the generative probability as we let
β → ∞. This gives

    p(X, Z) ~ exp[ −β( Σ_{t=1}^N ‖x_t − μ_{z_t}‖² + λ Σ_{t=2}^N KL(z_t, m_{z_{t−1}}) + λ_1 Σ_{k=1}^K (s_k − 1) + λ_2 (K − 1) )
                    + log(p(z_1)) ].    (6)
Therefore, maximizing the generative probability is asymptotically equivalent to the following optimization problem:

    min_{K,Z,μ,T}  Σ_{t=1}^N ‖x_t − μ_{z_t}‖² + λ Σ_{t=2}^N KL(z_t, m_{z_{t−1}}) + λ_1 Σ_{k=1}^K (s_k − 1) + λ_2 (K − 1).    (7)
In words, this objective seeks to minimize a penalized k-means objective, with three penalties. The
first is the same as in the parametric case: a penalty based on the transitions from state to state. The
second penalty penalizes the number of transitions out of each state, and the third penalty penalizes
the total number of states. Note this is similar to the objective function derived in [8] for the HDP,
but here there is no dependence on any particular samplers. This can also be considered as MAP
estimation of the parameters, since p(Z, μ | X) ∝ p(X | Z) p(Z) p(μ).
3.2 Algorithm
The algorithm presented in [8] could be almost directly applied to (7) but it neglects the sequential
characteristics of the model. Instead, we present a new algorithm to directly optimize (7). We follow
the alternating minimization framework as in the parametric case, with some slight tweaks. Specifically, given observations {x_1, …, x_N} and λ, λ_1, λ_2, our high-level algorithm proceeds as follows:
(1) Initialization: initialize with one hidden state. The parameters are therefore K = 1,
    μ_1 = (1/N) Σ_{i=1}^N x_i, and T = 1.
(2) Perform a forward-backward step (via approximate dynamic programming) to update Z.
(3) Update K, ?, T .
(4) For each state i, (i = 1, . . . , K), check if the set of observations to any state j that are reached
by transitioning out of i can form a new dedicated hidden state and lower the objective function
in the process. If there are several such moves, choose the one with the maximum improvement
in objective function.
(5) Update K, ?, T .
(6) Iterate steps (2)-(5) until convergence.
There are two key changes to the algorithm beyond the standard parametric case. In the forward-backward routine (step 2), we compute the usual K × N matrix V, where V(c, t) represents the
minimum cost over paths of length t from the beginning of the sequence and that reach state c at
time step t. We use the term "cost" to refer to the sum of the distances of points to state means,
as well as the additive penalties incurred. However, to see why it is difficult to compute the exact
value of V in the nonparametric case, suppose we have computed the minimum cost of paths up to
step t − 1 and we would like to compute the values of V for step t. The cost of a path that ends
in state c is obtained by examining, for all states i, the cost of a path that ends at i at step t − 1
and then transitions to state c at step t. Thus, we must consider the transition from i to c. If there
are existing transitions from state i to state c, then we proceed as in a standard forward-backward
algorithm. However, we are also interested in two other cases: one where there are no existing
transitions from i to c but we consider this transition along with a penalty λ_1, and another where
an entirely new state is formed and we pay a penalty λ_2. In the first case, the standard forward-backward routine faces an immediate problem, since when we try to compute the cost of the path
given by V(c, t), the cost will be infinite as there is a −log(0) term from the transition probability.
We must therefore alter the forward-backward routine, or there will never be new states created nor
transitions to an existing state which previously had no transitions. The main idea is to derive and
use bounds on how much the transition matrix can change under the above scenarios. As long as
we can show that the values we obtain for V are upper bounds, then we can show that the objective
function will decrease after the forward-backward routine, as the existing sequence of states is also
a valid path (with no new incurred penalties).
The second change (step 4) is that we adopt a "local move" analogous to that described for the hard
HDP in [8]. This step determines if the objective will decrease if we create a new global state in
a certain fashion; in particular, for each existing state j, we compute the change in objective that
occurs when data points that transition from j to some state k are given their own new global state.
By construction this step decreases the objective.
Due to space constraints, full details of the algorithm, along with a local convergence proof, are
provided in the supplementary material (section B).
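To make the quantities in objective (7) concrete, the sketch below evaluates (7) for a given labeling.
It is a hypothetical helper rather than the authors' code, useful e.g. for verifying that steps (2)-(5)
above monotonically decrease the objective; clipping s_k − 1 at zero for states without outgoing
transitions is our own convention.

```python
import numpy as np

def objective(X, z, mu, T, lam, lam1, lam2, eps=1e-10):
    """Value of objective (7): z is an int label sequence, mu is K x d, T is K x K."""
    K = mu.shape[0]
    fit = ((X - mu[z]) ** 2).sum()                       # k-means term
    trans = -lam * np.log(T[z[:-1], z[1:]] + eps).sum()  # lam * sum_t KL(z_t, m_{z_{t-1}})
    # s_k = number of distinct states reached from state k in one step along the path
    s = np.array([len(set(z[1:][z[:-1] == k])) for k in range(K)])
    return fit + trans + lam1 * np.maximum(s - 1, 0).sum() + lam2 * (K - 1)
```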
4 Experiments
We conclude with a brief set of experiments designed to highlight the benefits of our approach.
Namely, we will show that our methods have benefits over the existing parametric and nonparametric HMM algorithms in terms of speed and accuracy.
Synthetic Data. First we compare our nonparametric algorithm with the Beam Sampler for the
iHMM¹. A sequence of length 3000 was generated over a varying number of hidden states with
the all-zeros transition matrix except that T_{i,i+1} = 0.8 and T_{i,i+2} = 0.2 (when i + 1 > K, the
total number of states, we choose j = i + 1 mod K and let T_{i,j} = 0.8, and similarly for i + 2).
Observations were sampled from symmetric Gaussian distributions with means of {3, 6, . . . , 3K}
and a variance of 0.9.
The data described above were trained using our nonparametric algorithm (asymp-iHMM) and the
Beam sampler. For our nonparametric algorithm, we performed a grid search over all three parameters and selected the parameters using a heuristic (see the supplementary material for a discussion
of this heuristic). For the Beam sampling algorithm, we used the following hyperparameter settings:
gamma hyperpriors (4, 1) for α, (3, 6) for γ, and a zero mean normal distribution for the base H
with the variance equal to 10% of the empirical variance of the dataset. We also normalized the
sequence to have zero mean. The number of selected samples was varied among 10, 100, and 1000
for different numbers of states, with 5 iterations between two samples. (Note: there are no burn-in
iterations and all samplers are initialized with a randomly initialized 20-state labeling.)

¹ http://mloss.org/software/view/205/
[Figure 1: two panels. Left, "Training Accuracy": NMI score vs. # of states. Right, "Training Time": log of the time vs. # of states. Curves: AsymIhmm, Beam10, Beam100, Beam1000.]
Figure 1: Our algorithm (asymp-iHMM) vs. the Beam Sampler on the synthetic Gaussian hidden
Markov model data. (Left) The training accuracy; (Right) The training time on a log-scale.
In Figure 1 (best viewed in color), the training accuracy and running time for the two algorithms
are shown, respectively. The accuracy of the Beam sampler is given by the highest among all the
samples selected. The accuracy is shown in terms of the normalized mutual information (NMI) score
(in the range of [0,1]), since the sampler may output different number of states than the ground truth
and NMI can handle this situation. We can see that, in all datasets, our algorithm performs better
than the sampling method in terms of accuracy, but with running time similar to the sampler with
only 10 samples. For these datasets, we also observe that the EM algorithm for the standard HMM
(not reported in the figure) can easily output a smaller number of states than the ground truth, which
yields a smaller NMI score. We also observed that the Beam sampler is highly sensitive to the
initialization of hyperparameters. Putting flat priors over the hyperparameters can ameliorate the
situation, but also substantially increases the number of samples required.
Next we demonstrate the effect of the compensation parameter λ in the parametric asymptotic model,
along with comparisons to the standard HMM. We will call the generalized segmental k-means of
Section 2 the "asymp-HMM" algorithm, shortened to "AHMM" as appropriate. For this experiment,
we used univariate Gaussians with means at 3, 6, and 10, and standard deviation of 2.9. In our
ground-truth transition kernel, state i had an 80% prob. of transitioning to state i + 1, and 10% prob.
of transitioning to each of the other states. 5000 datapoints were generated from this model. The
first 40% of the data was used for training, and the remaining 60% for prediction. The means in
both the standard HMM and the asymp-HMM were initialized by the centroids learned by k-means
from the training data. The transition kernels were initialized randomly. Each algorithm was run 50
times; the averaged results are shown in Figure 2.
Figure 2 shows the effect of λ on accuracy as measured by NMI and scaled prediction error. We
see the expected tradeoff: for small λ, the problem essentially reduces to standard k-means, whereas
for large λ the observations are essentially ignored. For λ = 1, corresponding to standard segmental
k-means, we obtain results similar to the standard HMM, which obtains an NMI of .57 and error of
1.16. Thus, the parametric method offers some added flexibility via the new λ parameter.

[Figure 2: NMI and prediction error as a function of the compensation parameter λ.]

Financial time-series prediction. Our next experiment illustrates the advantages of our algorithms
in a financial prediction problem. The sequence consists of 3668 values of the Standard & Poor's
500 index on consecutive trading days from Jan 02, 1998 to July 30, 2012². The index exhibited
appreciable variability in this period, with
period (better in color); see text for details.
from Jan 02, 1998 to July 30, 20122 . The index exhibited appreciable variability in this period, with
both bull and bear runs. The goal here was to predict the index value on a test sequence of trading
days, and compare the accuracies and runtimes of the algorithms.
To prevent overfitting, we used a training window of length 500. This window size was empirically
chosen to provide a balance between prediction accuracy and runtime. The algorithms were trained
on the sequence from index i to i + 499, and then the i + 500-th value was predicted and compared
with the actual recorded value at that point in the sequence. i ranged from 1 to 3168. As before, the
mixture means were initialized with k-means and the transition kernels were given random initial
values. For the asymp-HMM and the standard HMM, the number of latent states was empirically
chosen to be 5. For the asymp-iHMM, we tune the parameters to also get 5 states on average. For
predicting observation T + 1 given observations up to step T , we used the weighted average of the
learned state means, weighted by the transition probabilities given by the state of the observation at
time T .
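This prediction rule is a one-liner; the fragment below is our own rendering of the description,
assuming the learned state means mu, a row-stochastic transition matrix T, and the label z_T of the
observation at time T.

```python
import numpy as np

def predict_next(mu, T, z_T):
    # x_hat_{T+1} = sum_k T[z_T, k] * mu_k : the state means weighted by the
    # transition probabilities out of the current state's label.
    return T[z_T] @ mu
```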
We ran the standard HMM along with both the parametric and non-parametric asymptotic algorithms
on this data (the Beam sampler was too slow to run over this data, as each individual prediction took
on the order of minutes). The values predicted from time step 501 to 3668 are plotted with the true
index values in that time range in Figure 3. Both the parametric and non-parametric asymptotic
algorithms perform noticably better than the standard HMM; they are able to better approximate the
actual curve across all kinds of temporal fluctuations. Indeed, the difference is most stark in the
areas of high-frequency oscillations. While the standard HMM returns an averaged-out prediction,
our algorithms latch onto the underlying behavior almost immediately and return noticably more
accurate predictions. Prediction accuracy has been measured using the mean absolute percentage
(MAP) error, which is the mean of the absolute differences of the predicted and true values expressed
as percentages of the true values. The MAP error for the HMM was 6.44%, that for the asymp-HMM
was 3.16%, and that for the asymp-iHMM was 2.78%. This confirms our visual perception of the
asymp-iHMM algorithm returning the best-fitted prediction in Figure 3.
Additional Real-World Results. We also compared our methods on a well-log data set that was
used for testing the Beam sampler. Due to space constraints, further discussion of these results is
included in the supplementary material.
5 Conclusion
This paper considered an asymptotic treatment of the HMM and iHMM. The goal was to obtain
non-probabilistic formulations inspired by the HMM, in order to expand small-variance asymptotics
to sequential models. We view our main contribution as a novel dynamic-programming-based algorithm for sequential data with a non-fixed number of states that is derived from the iHMM model.
Acknowledgements
This work was supported by NSF award IIS-1217433.
² http://research.stlouisfed.org/fred2/series/SP500/downloaddata
References
[1] C. E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric
problems. The Annals of Statistics, 2(6):1152–1174, 1974.
[2] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences.
Journal of Machine Learning Research, 6:1705–1749, 2005.
[3] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In
Advances in Neural Information Processing Systems, 2002.
[4] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[5] T. Broderick, B. Kulis, and M. I. Jordan. MAD-Bayes: MAP-based asymptotic derivations
from Bayes. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[6] J. V. Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden
Markov model. In Proceedings of the 25th International Conference on Machine Learning,
2008.
[7] K. Jiang, B. Kulis, and M. I. Jordan. Small-variance asymptotics for exponential family Dirichlet process mixture models. In Advances in Neural Information Processing Systems, 2012.
[8] B. Kulis and M. I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics.
In Proceedings of the 29th International Conference on Machine Learning, 2012.
[9] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[10] S. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing
Systems, 1998.
[11] E. Sudderth. Toward reliable Bayesian nonparametric learning. In NIPS Workshop on Bayesian
Nonparametric Models for Reliable Planning and Decision-Making Under Uncertainty, 2012.
[12] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal
of the American Statistical Association, 101(476):1566–1581, 2006.
[13] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the
Royal Statistical Society, Series B, 61(3):611–622, 1999.
[14] S. Tong and D. Koller. Restricted Bayes optimal classifiers. In Proc. 17th AAAI Conference,
2000.
4,325 | 4,914 | The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited
Matthias Hein, Simon Setzer, Leonardo Jost and Syama Sundar Rangapuram
Department of Computer Science
Saarland University
Abstract
Hypergraphs allow one to encode higher-order relationships in data and are thus a
very flexible modeling tool. Current learning methods are either based on approximations of the hypergraphs via graphs or on tensor methods which are only applicable under special conditions. In this paper, we present a new learning framework
on hypergraphs which fully uses the hypergraph structure. The key element is a
family of regularization functionals based on the total variation on hypergraphs.
1 Introduction
Graph-based learning is by now well established in machine learning and is the standard way to deal
with data that encode pairwise relationships. Hypergraphs are a natural extension of graphs which
allow one to also model higher-order relations in data. It has been recognized in several application
areas such as computer vision [1, 2], bioinformatics [3, 4] and information retrieval [5, 6] that such
higher-order relations are available and help to improve the learning performance.
Current approaches in hypergraph-based learning can be divided into two categories. The first one
uses tensor methods for clustering as the higher-order extension of matrix (spectral) methods for
graphs [7, 8, 9]. While tensor methods are mathematically quite appealing, they are limited to so-called k-uniform hypergraphs, that is, each hyperedge contains exactly k vertices. Thus, they are not
able to model mixed higher-order relationships. The second main approach can deal with arbitrary
hypergraphs [10, 11]. The basic idea of this line of work is to approximate the hypergraph via a standard weighted graph. In a second step, one then uses methods developed for graph-based clustering
and semi-supervised learning. The two main ways of approximating the hypergraph by a standard
graph are the clique and the star expansion which were compared in [12]. One can summarize [12]
by stating that no approximation fully encodes the hypergraph structure. Earlier, [13] have proven
that an exact representation of the hypergraph via a graph retaining its cut properties is impossible.
In this paper, we overcome the limitations of both existing approaches. For both clustering and semi-supervised learning the key element, either explicitly or implicitly, is the cut functional. Our aim is
to directly work with the cut defined on the hypergraph. We discuss in detail the differences of the
hypergraph cut and the cut induced by the clique and star expansion in Section 2.1. Then, in Section
2.2, we introduce the total variation on a hypergraph as the Lovasz extension of the hypergraph
cut. Based on this, we propose a family of regularization functionals which interpolate between
the total variation and a regularization functional enforcing smoother functions on the hypergraph
corresponding to Laplacian-type regularization on graphs. They are the key for the semi-supervised
learning method introduced in Section 3. In Section 4, we show in line of recent research [14, 15, 16,
17] that there exists a tight relaxation of the normalized hypergraph cut. In both learning problems,
convex optimization problems have to be solved for which we derive scalable methods in Section
5. The main ingredients of these algorithms are proximal mappings for which we provide a novel
algorithm and analyze its complexity. In the experimental section 6, we show that fully incorporating
hypergraph structure is beneficial. All proofs are moved to the supplementary material.
2 The Total Variation on Hypergraphs
A large class of graph-based algorithms in semi-supervised learning and clustering is based either
explicitly or implicitly on the cut. Thus, we discuss first in Section 2.1 the hypergraph cut and the
corresponding approximations. In Section 2.2, we introduce, in analogy to graphs, the total variation
on hypergraphs as the Lovasz extension of the hypergraph cut.
2.1 Hypergraphs, Graphs and Cuts
Hypergraphs allow modeling relations which are not only pairwise as in graphs but involve multiple
vertices. In this paper, we consider weighted undirected hypergraphs $H = (V, E, w)$ where $V$ is the vertex set with $|V| = n$ and $E$ the set of hyperedges with $|E| = m$. Each hyperedge $e \in E$ corresponds to a subset of vertices, i.e., to an element of $2^V$. The vector $w \in \mathbb{R}^m$ contains for each hyperedge $e$ its non-negative weight $w_e$. In the following, we use the letter $H$ also for the incidence matrix $H \in \mathbb{R}^{|V| \times |E|}$, defined for $i \in V$ and $e \in E$ as $H_{i,e} = 1$ if $i \in e$ and $H_{i,e} = 0$ otherwise. The degree of a vertex $i \in V$ is defined as $d_i = \sum_{e \in E} w_e H_{i,e}$ and the cardinality of an edge $e$ can be written as $|e| = \sum_{j \in V} H_{j,e}$. We would like to emphasize that we do not impose the restriction that the hypergraph is $k$-uniform, i.e., that each hyperedge contains exactly $k$ vertices.
The considered class of hypergraphs contains the set of undirected, weighted graphs which is equivalent to the set of 2-uniform hypergraphs. The motivation for the total variation on hypergraphs
comes from the correspondence between the cut on a graph and the total variation functional. Thus,
we recall the definition of the cut on weighted graphs $G = (V, W)$ with weight matrix $W$. Let $\overline{C} = V \setminus C$ denote the complement of $C$ in $V$. Then, for a partition $(C, \overline{C})$, the cut is defined as
$$\mathrm{cut}_G(C, \overline{C}) = \sum_{i,j \,:\, i \in C,\, j \in \overline{C}} w_{ij}.$$
This standard definition of the cut carries over naturally to a hypergraph $H$:
$$\mathrm{cut}_H(C, \overline{C}) = \sum_{e \in E \,:\, e \cap C \neq \emptyset,\; e \cap \overline{C} \neq \emptyset} w_e. \qquad (1)$$
Thus, the cut functional on a hypergraph is just the sum of the weights of the hyperedges which have vertices both in $C$ and $\overline{C}$. It is not biased towards a particular way the hyperedge is cut, that is, how many vertices of the hyperedge are in $C$ resp. $\overline{C}$. This emphasizes that the vertices in a hyperedge belong together and we penalize every cut of a hyperedge with the same value.
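To make the definition concrete, the following sketch (our own illustration, not code from the paper) evaluates $\mathrm{cut}_H$ from Eq. (1); the edge-list representation and variable names are assumptions made for the example.

```python
# Minimal sketch of the hypergraph cut in Eq. (1): a hyperedge contributes its
# full weight w_e whenever it has vertices on both sides of the partition,
# regardless of how many vertices fall on each side.

def hypergraph_cut(hyperedges, weights, C):
    """hyperedges: list of vertex collections; weights: list of floats;
    C: set of vertices on one side of the partition."""
    C = set(C)
    total = 0.0
    for e, w_e in zip(hyperedges, weights):
        e = set(e)
        if e & C and e - C:  # e touches both C and its complement
            total += w_e
    return total

# Toy example: the hyperedge {0,1,2} is cut, the hyperedge {3,4} is not.
edges = [{0, 1, 2}, {3, 4}]
w = [1.0, 2.0]
print(hypergraph_cut(edges, w, C={0, 3, 4}))  # 1.0
```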
In order to handle hypergraphs with existing methods developed for graphs, the focus in previous
works [11, 12] has been on transforming the hypergraph into a graph. In [11], they suggest using
the clique expansion (CE), i.e., every hyperedge $e \in E$ is replaced with a fully connected subgraph where every edge in this subgraph has weight $\frac{w_e}{|e|}$. This leads to the cut functional $\mathrm{cut}_{CE}$,
$$\mathrm{cut}_{CE}(C, \overline{C}) := \sum_{e \in E \,:\, e \cap C \neq \emptyset,\; e \cap \overline{C} \neq \emptyset} \frac{w_e}{|e|}\, |e \cap C| \, |e \cap \overline{C}|. \qquad (2)$$
Note that in contrast to the hypergraph cut (1), the value of $\mathrm{cut}_{CE}$ depends on the way each hyperedge is cut, since the term $|e \cap C| \, |e \cap \overline{C}|$ makes the weights dependent on the partition. In particular,
the smallest weight is attained if only a single vertex is split off, whereas the largest weight is attained
if the partition of the hyperedge is most balanced. In comparison to the hypergraph cut, this leads
to a bias towards cuts that favor splitting off single vertices from a hyperedge which in our point of
view is an undesired property for most applications. We illustrate this with an example in Figure
1, where the minimum hypergraph cut (cutH ) leads to a balanced partition, whereas the minimum
clique expansion cut (cutCE ) not only cuts an additional hyperedge but is also unbalanced. This is
due to its bias towards splitting off single nodes of a hyperedge. Another argument against the clique
expansion is computational complexity. For large hyperedges the clique expansion leads to (almost)
fully connected graphs which makes computations slow and is prohibitive for large hypergraphs.
We omit the discussion of the star graph approximation of hypergraphs discussed in [12] as it is
shown there that the star graph expansion is very similar to the clique expansion. Instead, we want
to recall the result of Ihler et al. [13] which states that in general there exists no graph with the same vertex set $V$ which has for every partition $(C, \overline{C})$ the same cut value as the hypergraph cut.
Figure 1: Minimum hypergraph cut $\mathrm{cut}_H$ vs. minimum cut of the clique expansion $\mathrm{cut}_{CE}$: for edge weights $w_1 = w_4 = 10$, $w_2 = w_5 = 0.1$ and $w_3 = 0.6$, the minimum hypergraph cut is $(C_1, \overline{C}_1)$, which is perfectly balanced. Although cutting one hyperedge more and being unbalanced, $(C_2, \overline{C}_2)$ is the optimal cut for the clique expansion approximation.
Finally, note that for weighted 3-uniform hypergraphs it is always possible to find a corresponding
graph such that any cut of the graph is equal to the corresponding cut of the hypergraph.
Proposition 2.1. Suppose $H = (V, E, w)$ is a weighted 3-uniform hypergraph. Then, $W \in \mathbb{R}^{|V| \times |V|}$ defined as $W = \frac{1}{2} H \,\mathrm{diag}(w)\, H^T$ defines the weight matrix of a graph $G = (V, W)$ where each cut of $G$ has the same value as the corresponding hypergraph cut of $H$.
2.2 The Total Variation on Hypergraphs
In this section, we define the total variation on hypergraphs. The key technical element is the Lovasz
extension which extends a set function, seen as a mapping on $2^V$, to a function on $\mathbb{R}^{|V|}$.
Definition 2.1. Let $\hat{S} : 2^V \to \mathbb{R}$ be a set function with $\hat{S}(\emptyset) = 0$. Let $f \in \mathbb{R}^{|V|}$, let $V$ be ordered such that $f_1 \leq f_2 \leq \dots \leq f_n$ and define $C_i = \{j \in V \mid j > i\}$. Then, the Lovasz extension $S : \mathbb{R}^{|V|} \to \mathbb{R}$ of $\hat{S}$ is given by
$$S(f) = \sum_{i=1}^{n-1} \hat{S}(C_i)(f_{i+1} - f_i) + f_1 \hat{S}(V) = \sum_{i=1}^{n} f_i \big(\hat{S}(C_{i-1}) - \hat{S}(C_i)\big).$$
Note that for the characteristic function of a set $C \subset V$, we have $S(\mathbf{1}_C) = \hat{S}(C)$.
It is well-known that the Lovasz extension $S$ is a convex function if and only if $\hat{S}$ is submodular [18]. For graphs $G = (V, W)$, the total variation on graphs is defined as the Lovasz extension of the graph cut [18], given as $TV_G : \mathbb{R}^{|V|} \to \mathbb{R}$, $TV_G(f) = \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} |f_i - f_j|$.
Proposition 2.2. The total variation $TV_H : \mathbb{R}^{|V|} \to \mathbb{R}$ on a hypergraph $H = (V, E, w)$, defined as the Lovasz extension of the hypergraph cut, $\hat{S}(C) = \mathrm{cut}_H(C, \overline{C})$, is a convex function given by
$$TV_H(f) = \sum_{e \in E} w_e \Big( \max_{i \in e} f_i - \min_{j \in e} f_j \Big) = \sum_{e \in E} w_e \max_{i,j \in e} |f_i - f_j|.$$
Note that the total variation of a hypergraph cut reduces to the total variation on graphs if H is
2-uniform (standard graph). There is an interesting relation of the total variation on hypergraphs
to sparsity-inducing group norms. Namely, defining for each edge $e \in E$ the difference operator $D_e : \mathbb{R}^{|V|} \to \mathbb{R}^{|V| \times |V|}$ by $(D_e f)_{ij} = f_i - f_j$ if $i, j \in e$ and $0$ otherwise, $TV_H$ can be written as $TV_H(f) = \sum_{e \in E} w_e \|D_e f\|_\infty$, which can be seen as inducing group sparse structure on the
gradient level. The groups are the hyperedges and thus are typically overlapping. This could lead
potentially to extensions of the elastic net on graphs to hypergraphs.
It is known that using the total variation on graphs as a regularization functional in semi-supervised
learning (SSL) leads to very spiky solutions for small numbers of labeled points. Thus, one would
like to have regularization functionals enforcing more smoothness of the solutions. For graphs this
is achieved by using the family of regularization functionals $\Omega_{G,p} : \mathbb{R}^{|V|} \to \mathbb{R}$,
$$\Omega_{G,p}(f) = \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} |f_i - f_j|^p.$$
For p = 2 we get the regularization functional of the graph Laplacian which is the basis of a large
class of methods on graphs. In analogy to graphs, we define a corresponding family on hypergraphs.
Definition 2.2. The regularization functionals $\Omega_{H,p} : \mathbb{R}^{|V|} \to \mathbb{R}$ for a hypergraph $H = (V, E, w)$ are defined for $p \geq 1$ as
$$\Omega_{H,p}(f) = \sum_{e \in E} w_e \Big( \max_{i \in e} f_i - \min_{j \in e} f_j \Big)^p.$$
Lemma 2.1. The functionals $\Omega_{H,p} : \mathbb{R}^{|V|} \to \mathbb{R}$ are convex.
Note that $\Omega_{H,1}(f) = TV_H(f)$. If $H$ is a graph and $p \geq 1$, $\Omega_{H,p}$ reduces to the Laplacian regularization $\Omega_{G,p}$. Note that for characteristic functions of sets, $f = \mathbf{1}_C$, it holds that $\Omega_{H,p}(\mathbf{1}_C) = \mathrm{cut}_H(C, \overline{C})$. Thus, the difference between the hypergraph cut and its approximations such as clique and star expansion carries over to $\Omega_{H,p}$ and $\Omega_{G_{CE},p}$, respectively.
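The sketch below (ours, for illustration) evaluates $\Omega_{H,p}$ directly from Definition 2.2; setting p = 1 recovers $TV_H$. The edge-list representation is an assumption made for the example.

```python
# Sketch of Omega_{H,p}(f) = sum_e w_e * (max_{i in e} f_i - min_{j in e} f_j)^p.
# With p = 1 this is the hypergraph total variation TV_H of Proposition 2.2.

import numpy as np

def omega_hp(f, hyperedges, weights, p=1.0):
    f = np.asarray(f, dtype=float)
    return sum(w_e * (f[list(e)].max() - f[list(e)].min()) ** p
               for e, w_e in zip(hyperedges, weights))

f = np.array([1.0, 0.5, -1.0, 2.0])
edges = [[0, 1, 2], [2, 3]]
w = [1.0, 0.5]
print(omega_hp(f, edges, w, p=1))  # TV_H(f) = 1*2.0 + 0.5*3.0 = 3.5
print(omega_hp(f, edges, w, p=2))  # 1*4.0 + 0.5*9.0 = 8.5
```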
3 Semi-supervised Learning
With the regularization functionals derived in the last section, we can immediately write down a
formulation for two-class semi-supervised learning on hypergraphs similar to the well-known approaches of [19, 20]. Given the label set $L$ we construct the vector $Y \in \mathbb{R}^n$ with $Y_i = 0$ if $i \notin L$ and $Y_i$ equal to the label in $\{-1, 1\}$ if $i \in L$. We propose solving
$$f^* = \arg\min_{f \in \mathbb{R}^{|V|}} \frac{1}{2} \|f - Y\|_2^2 + \lambda\, \Omega_{H,p}(f), \qquad (3)$$
where $\lambda > 0$ is the regularization parameter. In Section 5, we discuss how this convex optimization problem can be solved efficiently for the cases $p = 1$ and $p = 2$. Note that other loss functions than the squared loss could be used. However, the regularizer aims at contracting the function and we use the label set $\{-1, 1\}$ so that $f^* \in [-1, 1]^{|V|}$. Hence, on the interval $[-1, 1]$ the squared loss behaves very similarly to other margin-based loss functions. In general, we recommend using p = 2
as it corresponds to Laplacian-type regularization for graphs which is known to work well. For
graphs p = 1 is known to produce spiky solutions for small numbers of labeled points. This is due
to the effect that cutting "out" the labeled points leads to a much smaller cut than, e.g., producing a
balanced partition. However, in the case where one has only a small number of hyperedges this effect
is much smaller and we will see in the experiments that p = 1 also leads to reasonable solutions.
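To make the objective (3) concrete, here is a deliberately crude sketch that minimizes it by plain subgradient descent. It is not the PDHG solver developed in Section 5, and all variable names are ours.

```python
# Subgradient descent on 0.5*||f - Y||^2 + lam * Omega_{H,p}(f).
# The subgradient of w_e*(max - min)^p touches only the arg-max and arg-min
# vertices of each hyperedge.

import numpy as np

def ssl_subgradient(Y, hyperedges, weights, lam=0.1, p=2, iters=500):
    f = Y.astype(float).copy()
    for t in range(1, iters + 1):
        g = f - Y  # gradient of the data term
        for e, w_e in zip(hyperedges, weights):
            idx = np.asarray(e)
            i_max, i_min = idx[np.argmax(f[idx])], idx[np.argmin(f[idx])]
            gap = f[i_max] - f[i_min]
            scale = w_e * (p * gap ** (p - 1) if gap > 0 else 0.0)
            g[i_max] += lam * scale
            g[i_min] -= lam * scale
        f -= (1.0 / np.sqrt(t)) * g  # decaying step size
    return f

Y = np.array([1.0, 0.0, 0.0, -1.0, 0.0])  # two labeled points
edges = [[0, 1, 2], [2, 3, 4]]
print(ssl_subgradient(Y, edges, [1.0, 1.0]))
```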
4 Balanced Hypergraph Cuts
In Section 2.1, we discussed the difference between the hypergraph cut (1) and the graph cut of
the clique expansion (2) of the hypergraph and gave a simple example in Figure 1 where these
cuts yield quite different results. Clearly, this difference carries over to the famous normalized cut
criterion introduced in [21, 22] for clustering of graphs with applications in image segmentation.
For a hypergraph the ratio resp. normalized cut can be formulated as
$$\mathrm{RCut}(C, \overline{C}) = \frac{\mathrm{cut}_H(C, \overline{C})}{|C| \, |\overline{C}|}, \qquad \mathrm{NCut}(C, \overline{C}) = \frac{\mathrm{cut}_H(C, \overline{C})}{\mathrm{vol}(C) \, \mathrm{vol}(\overline{C})},$$
which incorporate different balancing criteria. Note, that in contrast to the normalized cut for graphs
the normalized hypergraph cut allows no relaxation into a linear eigenproblem (spectral relaxation).
Thus, we follow a recent line of research [14, 15, 16, 17] where it has been shown that the standard
spectral relaxation of the normalized cut used in spectral clustering [22] is loose and that a tight, in
fact exact, relaxation can be formulated in terms of a nonlinear eigenproblem. Although nonlinear
eigenproblems are non-convex, one can compute nonlinear eigenvectors quite efficiently at the price
of loosing global optimality. However, it has been shown that the potentially non-optimal solutions
of the exact relaxation, outperform in practice the globally optimal solution of the loose relaxation,
often by a large margin. In this section, we extend their approach to hypergraphs and consider general balanced hypergraph cuts $\mathrm{Bcut}(C, \overline{C})$ of the form
$$\mathrm{Bcut}(C, \overline{C}) = \frac{\mathrm{cut}_H(C, \overline{C})}{\hat{S}(C)},$$
where $\hat{S} : 2^V \to \mathbb{R}_+$ is a non-negative, symmetric set function (that is, $\hat{S}(C) = \hat{S}(\overline{C})$). For the normalized cut one has $\hat{S}(C) = \mathrm{vol}(C)\,\mathrm{vol}(\overline{C})$, whereas for the Cheeger cut one has $\hat{S}(C) = \min\{\mathrm{vol}\, C, \mathrm{vol}\, \overline{C}\}$. Other
examples of balancing functions can be found in [16]. Our following result shows that the balanced
hypergraph cut also has an exact relaxation into a continuous nonlinear eigenproblem [14].
Theorem 4.1. Let $H = (V, E, w)$ be a finite, weighted hypergraph and $S : \mathbb{R}^{|V|} \to \mathbb{R}$ be the Lovasz extension of the symmetric, non-negative set function $\hat{S} : 2^V \to \mathbb{R}$. Then, it holds that
$$\min_{f \in \mathbb{R}^{|V|}} \frac{\sum_{e \in E} w_e \big( \max_{i \in e} f_i - \min_{j \in e} f_j \big)}{S(f)} = \min_{C \subset V} \frac{\mathrm{cut}_H(C, \overline{C})}{\hat{S}(C)}.$$
Further, let $f \in \mathbb{R}^{|V|}$ and define $C_t := \{i \in V \mid f_i > t\}$. Then,
$$\min_{t \in \mathbb{R}} \frac{\mathrm{cut}_H(C_t, \overline{C}_t)}{\hat{S}(C_t)} \leq \frac{\sum_{e \in E} w_e \big( \max_{i \in e} f_i - \min_{j \in e} f_j \big)}{S(f)}.$$
The last part of the theorem shows that "optimal thresholding" (turning $f \in \mathbb{R}^{|V|}$ into a partition) among all level sets of any $f \in \mathbb{R}^{|V|}$ can only lead to a better or equal balanced hypergraph cut.
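This thresholding step can be made concrete as follows; the sketch (ours) uses the ratio-cut balancing function $\hat{S}(C) = |C| \, |\overline{C}|$ and a brute-force sweep over the level sets of $f$.

```python
# Sweep the level sets C_t = {i : f_i > t} and keep the one with the smallest
# balanced hypergraph cut, as justified by the second part of Theorem 4.1.

import numpy as np

def hypergraph_cut(hyperedges, weights, C):
    return sum(w for e, w in zip(hyperedges, weights)
               if set(e) & C and set(e) - C)

def best_threshold(f, hyperedges, weights, n):
    best = (np.inf, None)
    for t in sorted(set(f))[:-1]:       # thresholds between sorted values
        C = {i for i in range(n) if f[i] > t}
        balance = len(C) * (n - len(C))  # ratio-cut balancing |C||C-bar|
        if balance == 0:
            continue
        score = hypergraph_cut(hyperedges, weights, C) / balance
        best = min(best, (score, C), key=lambda x: x[0])
    return best

f = np.array([0.9, 0.8, -0.7, -0.6])
edges, w = [[0, 1], [2, 3], [1, 2]], [1.0, 1.0, 0.5]
print(best_threshold(f, edges, w, n=4))  # best cut {0,1} vs {2,3}: 0.125
```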
The question remains how to minimize the ratio $Q(f) = \frac{TV_H(f)}{S(f)}$. As discussed in [16], every Lovasz extension $S$ can be written as a difference of convex positively 1-homogeneous functions¹, $S = S_1 - S_2$. Moreover, as shown in Prop. 2.2 the total variation $TV_H$ is convex. Thus, we have to minimize a non-negative ratio of a convex and a difference of convex (d.c.) function. We employ the RatioDCA algorithm [16] shown in Algorithm 1. The main part is the convex inner problem.
Algorithm 1 RatioDCA - Minimization of a non-negative ratio of 1-homogeneous d.c. functions
1: Objective: $Q(f) = \frac{R_1(f) - R_2(f)}{S_1(f) - S_2(f)}$. Initialization: $f^0$ = random with $\|f^0\|_2 = 1$, $\lambda^0 = Q(f^0)$
2: repeat
3:   $s_1(f^k) \in \partial S_1(f^k)$, $r_2(f^k) \in \partial R_2(f^k)$
4:   $f^{k+1} = \arg\min_{\|u\|_2 \leq 1} \big\{ R_1(u) - \langle u, r_2(f^k) \rangle + \lambda^k \big( S_2(u) - \langle u, s_1(f^k) \rangle \big) \big\}$
5:   $\lambda^{k+1} = \big(R_1(f^{k+1}) - R_2(f^{k+1})\big) / \big(S_1(f^{k+1}) - S_2(f^{k+1})\big)$
6: until $|\lambda^{k+1} - \lambda^k| / \lambda^k < \epsilon$
7: Output: eigenvalue $\lambda^{k+1}$ and eigenvector $f^{k+1}$.
In our case $R_1 = TV_H$, $R_2 = 0$, and thus the inner problem reads
$$\min_{\|u\|_2 \leq 1} \big\{ TV_H(u) + \lambda^k S_2(u) - \langle u, s_1(f^k) \rangle \big\}. \qquad (4)$$
For simplicity we restrict ourselves to submodular balancing functions, in which case S is convex
and thus S2 = 0. For the general case, see [16]. Note that the balancing functions of ratio/normalized
cut and Cheeger cut are submodular. It turns out that the inner problem is very similar to the semisupervised learning formulation (3). The efficient solution of both problems is discussed next.
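A compact sketch (ours, not the authors' implementation) of the RatioDCA loop for the ratio cut follows: the balancing function $\hat{S}(C) = |C| \, |\overline{C}|$ has Lovasz extension $S(f) = \frac{1}{2}\sum_{i,j}|f_i - f_j|$ (convex, so $S_2 = 0$), and the inner problem (4) is solved crudely by projected subgradient descent rather than by the PDHG method of Section 5.

```python
import numpy as np

def tv_h(f, edges, w):
    return sum(we * (f[list(e)].max() - f[list(e)].min())
               for e, we in zip(edges, w))

def tv_h_subgrad(f, edges, w):
    g = np.zeros_like(f)
    for e, we in zip(edges, w):
        idx = np.asarray(e)
        g[idx[np.argmax(f[idx])]] += we
        g[idx[np.argmin(f[idx])]] -= we
    return g

def balance(f):  # Lovasz extension of |C||C-bar|
    return np.abs(f[:, None] - f[None, :]).sum() / 2.0

def ratio_dca(edges, w, n, outer=20, inner=300, seed=0):
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(n)
    f /= np.linalg.norm(f)
    lam = tv_h(f, edges, w) / balance(f)
    for _ in range(outer):
        s1 = np.sign(f[:, None] - f[None, :]).sum(axis=1)  # subgrad of S
        u = f.copy()
        for t in range(1, inner + 1):  # crude solver for inner problem (4)
            u = u - (0.1 / np.sqrt(t)) * (tv_h_subgrad(u, edges, w) - lam * s1)
            u /= max(1.0, np.linalg.norm(u))  # project onto the unit ball
        f = u
        lam = tv_h(f, edges, w) / max(balance(f), 1e-12)
    return f, lam

edges, w = [[0, 1, 2], [2, 3], [3, 4, 5]], [1.0, 0.2, 1.0]
print(ratio_dca(edges, w, n=6))  # f separates around the weak edge [2, 3]
```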
5 Algorithms for the Total Variation on Hypergraphs
The problem (3) we want to solve for semi-supervised learning and the inner problem (4) of RatioDCA have a common structure. They are the sum of two convex functions: one of them is the novel regularizer $\Omega_{H,p}$ and the other is a data term denoted by $G$ here, cf. Table 1. We propose solving these problems using a primal-dual algorithm, denoted by PDHG, which was proposed in [23, 24]. Its main idea is to iteratively solve a proximal problem for each convex term in the objective function. The proximal map $\mathrm{prox}_g$ w.r.t. a mapping $g : \mathbb{R}^n \to \mathbb{R}$ is defined by
$$\mathrm{prox}_g(\tilde{x}) = \arg\min_{x \in \mathbb{R}^n} \Big\{ \frac{1}{2} \|x - \tilde{x}\|_2^2 + g(x) \Big\}.$$
¹ A function $f : \mathbb{R}^d \to \mathbb{R}$ is positively 1-homogeneous if $\forall \alpha > 0$, $f(\alpha x) = \alpha f(x)$.
The key idea is that often proximal problems can be solved efficiently leading to fast convergence
of the overall algorithm. We see in Table 1 that for both G the proximal problems have an explicit
solution. However, note that smooth convex terms can also be directly exploited [25]. For $\Omega_{H,p}$, we
distinguish two cases, p = 1 and p = 2. Detailed descriptions of the algorithms can be found in the
supplementary material.
SSL (3): $G(f) = \frac{1}{2}\|f - Y\|_2^2$, with $\mathrm{prox}_{\tau G}(\tilde{x}) = \frac{1}{1+\tau}(\tilde{x} + \tau Y)$
Inner problem of RatioDCA (4): $G(f) = -\langle s_1(f^k), f \rangle + \iota_{\|\cdot\|_2 \leq 1}(f)$, with $\mathrm{prox}_{\tau G}(\tilde{x}) = \frac{\tilde{x} + \tau s_1(f^k)}{\max\{1, \|\tilde{x} + \tau s_1(f^k)\|_2\}}$
Table 1: Data term and proximal map for SSL (3) (left) and the inner problem of RatioDCA (4) (right). The indicator function is defined as $\iota_{\|\cdot\|_2 \leq 1}(x) = 0$ if $\|x\|_2 \leq 1$ and $+\infty$ otherwise.
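Written out in code, the two proximal maps of Table 1 look as follows (a sketch; tau follows the PDHG notation used below).

```python
import numpy as np

def prox_data_term(x_tilde, Y, tau):
    """prox of tau*G with G(f) = 0.5*||f - Y||_2^2 (SSL problem (3))."""
    return (x_tilde + tau * Y) / (1.0 + tau)

def prox_inner_term(x_tilde, s1, tau):
    """prox of tau*G with G(f) = -<s1, f> + indicator(||f||_2 <= 1)
    (inner problem (4)): shift by tau*s1, then project on the unit ball."""
    z = x_tilde + tau * s1
    return z / max(1.0, np.linalg.norm(z))

x = np.array([2.0, -1.0]); Y = np.array([1.0, 0.0])
print(prox_data_term(x, Y, tau=0.5))   # [1.667, -0.667]
print(prox_inner_term(x, Y, tau=0.5))  # (x + 0.5*Y) projected onto the ball
```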
PDHG algorithm for $\Omega_{H,1}$. Let $m_e$ be the number of vertices in hyperedge $e \in E$. We write
$$\lambda\,\Omega_{H,1}(f) = F(Kf) := \sum_{e \in E} \big( F_{(e,1)}(K_e f) + F_{(e,2)}(K_e f) \big), \qquad (5)$$
where the rows of the matrices $K_e \in \mathbb{R}^{m_e \times n}$ are the $i$-th standard unit vectors for $i \in e$ and the functionals $F_{(e,j)} : \mathbb{R}^{m_e} \to \mathbb{R}$ are defined as
$$F_{(e,1)}(\alpha^{(e,1)}) = \lambda w_e \max(\alpha^{(e,1)}), \qquad F_{(e,2)}(\alpha^{(e,2)}) = -\lambda w_e \min(\alpha^{(e,2)}).$$
In contrast to the function $G$, in the PDHG algorithm we need the proximal maps for the conjugate functions of $F_{(e,j)}$. They are given by
$$F_{(e,1)}^* = \iota_{S_{\lambda w_e}}, \qquad F_{(e,2)}^* = \iota_{-S_{\lambda w_e}},$$
where $S_{\lambda w_e} = \{x \in \mathbb{R}^{m_e} : \sum_{i=1}^{m_e} x_i = \lambda w_e,\ x_i \geq 0\}$ is the scaled simplex in $\mathbb{R}^{m_e}$. The solutions of the proximal problems for $F_{(e,1)}^*$ and $F_{(e,2)}^*$ are the orthogonal projections onto these simplexes, written here as $P_{S_{\lambda w_e}}$ and $P_{-S_{\lambda w_e}}$, respectively. These projections can be done in linear time [26].
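For completeness, here is the standard O(m log m) sort-based simplex projection; note that [26] achieves linear time with a different method, so this sketch is a simple stand-in. The projection onto $-S_r$ follows by negating input and output.

```python
# Projection onto the scaled simplex S_r = {x : sum_i x_i = r, x_i >= 0}.

import numpy as np

def project_simplex(v, r=1.0):
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - r
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_neg_simplex(v, r=1.0):
    return -project_simplex(-v, r)

x = np.array([0.5, 1.2, -0.3])
p = project_simplex(x, r=1.0)
print(p, p.sum())  # [0.15, 0.85, 0.0], sums to 1
```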
With the proximal maps we have presented so far, the PDHG algorithm has the following form.
Algorithm 2 PDHG for $\Omega_{H,1}$
1: Initialization: $f^{(0)} = \bar{f}^{(0)} = 0$, $\theta \in [0, 1]$, $\sigma, \tau > 0$ with $\sigma \tau < 1/(2 \max_{i=1,\dots,n} \{c_i\})$
2: repeat
3:   $\alpha^{(e,1)(k+1)} = P_{S_{\lambda w_e}}\big(\alpha^{(e,1)(k)} + \sigma K_e \bar{f}^{(k)}\big)$, $e \in E$
4:   $\alpha^{(e,2)(k+1)} = P_{-S_{\lambda w_e}}\big(\alpha^{(e,2)(k)} + \sigma K_e \bar{f}^{(k)}\big)$, $e \in E$
5:   $f^{(k+1)} = \mathrm{prox}_{\tau G}\big(f^{(k)} - \tau \sum_{e \in E} K_e^T (\alpha^{(e,1)(k+1)} + \alpha^{(e,2)(k+1)})\big)$
6:   $\bar{f}^{(k+1)} = f^{(k+1)} + \theta (f^{(k+1)} - f^{(k)})$
7: until relative duality gap $< \epsilon$
8: Output: $f^{(k+1)}$.
The value $c_i = \sum_{e \in E} H_{i,e}$ is the number of hyperedges the vertex $i$ lies in. It is important to point out here that the algorithm decouples the problem in the sense that in every iteration we solve subproblems which treat the functionals $G$, $F_{(e,1)}$, $F_{(e,2)}$ separately and thus can be solved efficiently.
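The following sketch (ours) transcribes Algorithm 2 for the SSL data term $G(f) = \frac{1}{2}\|f - Y\|_2^2$, with a fixed iteration count in place of the duality-gap test; it reuses the sort-based simplex projection shown above.

```python
import numpy as np

def project_simplex(v, r):
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - r
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def pdhg_tv_ssl(Y, edges, w, lam, iters=500, theta=1.0):
    n = len(Y)
    c = np.zeros(n)                                # c_i = incident hyperedges
    for e in edges:
        c[list(e)] += 1
    sigma = tau = np.sqrt(0.99 / (2.0 * c.max()))  # sigma*tau < 1/(2 max c_i)
    f = np.zeros(n); f_bar = f.copy()
    a1 = [np.zeros(len(e)) for e in edges]         # duals alpha^(e,1)
    a2 = [np.zeros(len(e)) for e in edges]         # duals alpha^(e,2)
    for _ in range(iters):
        f_old = f.copy()
        div = np.zeros(n)
        for k, e in enumerate(edges):
            idx = list(e)
            a1[k] = project_simplex(a1[k] + sigma * f_bar[idx], lam * w[k])
            a2[k] = -project_simplex(-(a2[k] + sigma * f_bar[idx]), lam * w[k])
            div[idx] += a1[k] + a2[k]              # K_e^T (alpha1 + alpha2)
        f = (f - tau * div + tau * Y) / (1.0 + tau)  # prox of the data term
        f_bar = f + theta * (f - f_old)
    return f

Y = np.array([1.0, 0.0, 0.0, -1.0])
print(pdhg_tv_ssl(Y, [[0, 1, 2], [1, 2, 3]], [1.0, 1.0], lam=0.5))
```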
PDHG algorithm for $\Omega_{H,2}$. We define the matrices $K_e$ as above. Moreover, we introduce for every hyperedge $e \in E$ the functional
$$F_e(\alpha_e) = \lambda w_e \big( \max(\alpha_e) - \min(\alpha_e) \big)^2. \qquad (6)$$
Hence, we can write $\lambda\,\Omega_{H,2}(f) = \sum_{e \in E} F_e(K_e f)$. As we show in the supplementary material, the conjugate functions $F_e^*$ are not indicator functions and we thus solve the corresponding proximal problems via proximal problems for $F_e$. More specifically, we exploit the Moreau identity
$$\mathrm{prox}_{\sigma F_e^*}(\tilde{\alpha}_e) = \tilde{\alpha}_e - \sigma\, \mathrm{prox}_{\frac{1}{\sigma} F_e}(\tilde{\alpha}_e / \sigma), \qquad (7)$$
and use the following novel result concerning the proximal problem on the right-hand side of (7).
Prop. \ Dataset        Zoo     Mushrooms   Covertype (4,5)   Covertype (6,7)   20Newsgroups
Number of classes        7             2                 2                 2              4
|V|                    101          8124             12240             37877          16242
|E|                     42           112               104               123            100
Sum of |e| over E     1717        170604            146880            454522          65451
|E| of Clique Exp.   10201      65999376         143008092        1348219153       53284642
Table 2: Datasets used for SSL and clustering. Note that the clique expansion leads for all datasets
to a graph which is close to being fully connected as all datasets contain large hyperedges. For
covertype (6,7) the weight matrix needs over 10GB of memory, the original hypergraph only 4MB.
Proposition 5.1. For any $\sigma > 0$ and any $\tilde{\alpha}_e \in \mathbb{R}^{m_e}$ the proximal map
$$\mathrm{prox}_{\frac{1}{\sigma} F_e}(\tilde{\alpha}_e) = \arg\min_{\alpha_e \in \mathbb{R}^{m_e}} \Big\{ \frac{1}{2} \|\alpha_e - \tilde{\alpha}_e\|_2^2 + \frac{\lambda w_e}{\sigma} \big( \max(\alpha_e) - \min(\alpha_e) \big)^2 \Big\}$$
can be computed with $O(m_e \log m_e)$ arithmetic operations.
A corresponding algorithm, which is new to the best of our knowledge, is provided in the supplementary material. We note here that the complexity is due to the fact that we sort the input vector $\tilde{\alpha}_e$.
The PDHG algorithm for p = 2 is provided in the supplementary material. It has the same structure
as Algorithm 2 with the only difference that we now solve (7) for every hyperedge.
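Since the exact O(m_e log m_e) routine appears only in the supplementary material, the sketch below (ours) solves the proximal problem of Proposition 5.1 numerically instead: the minimizer clips $\tilde{\alpha}_e$ to an interval $[b, a]$, so the problem reduces to a two-variable convex program handed to a generic optimizer. It is a stand-in for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def prox_Fe(alpha_tilde, c):
    """argmin_alpha 0.5*||alpha - alpha_tilde||^2 + c*(max - min)^2."""
    lo, hi = alpha_tilde.min(), alpha_tilde.max()

    def objective(ab):
        a, b = ab
        clipped = np.clip(alpha_tilde, min(a, b), max(a, b))
        return 0.5 * np.sum((clipped - alpha_tilde) ** 2) + c * (a - b) ** 2

    res = minimize(objective, x0=np.array([hi, lo]), method="Nelder-Mead")
    a, b = res.x
    return np.clip(alpha_tilde, min(a, b), max(a, b))

x = np.array([3.0, 0.0, -2.0, 1.0])
print(prox_Fe(x, c=1.0))  # the range of x shrinks towards its midpoint
```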
6 Experiments
The method of Zhou et al. [11] seems to be the standard algorithm for clustering and SSL on hypergraphs. We compare to them on a selection of UCI datasets summarized in Table 2. Zoo, Mushrooms and 20Newsgroups² have been used also in [11] and contain only categorical features. As in [11],
a hyperedge of weight one is created by all data points which have the same value of a categorical
feature. For covertype we quantize the numerical features into 10 bins of equal size. Two datasets
are created each with two classes (4,5 and 6,7) of the original dataset.
Semi-supervised Learning (SSL). In [11], they suggest using a regularizer induced by the normalized Laplacian $L_{CE}$ arising from the clique expansion,
$$L_{CE} = I - D_{CE}^{-1/2} H W' H^T D_{CE}^{-1/2},$$
where $D_{CE}$ is a diagonal matrix with entries $d_{CE}(i) = \sum_{e \in E} H_{i,e} \frac{w_e}{|e|}$ and $W' \in \mathbb{R}^{|E| \times |E|}$ is a diagonal matrix with entries $w'(e) = w_e / |e|$. The SSL problem can then be formulated as, for $\lambda > 0$,
$$\arg\min_{f \in \mathbb{R}^{|V|}} \big\{ \|f - Y\|_2^2 + \lambda \langle f, L_{CE} f \rangle \big\}.$$
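For comparison, a sketch (ours) of the clique-expansion Laplacian $L_{CE}$ built with sparse matrices; for large hyperedges the product $H W' H^T$ is nearly dense, which is exactly the memory blow-up reported in Table 2.

```python
import numpy as np
import scipy.sparse as sp

def clique_expansion_laplacian(edges, w, n):
    rows = [i for e in edges for i in e]
    cols = [k for k, e in enumerate(edges) for _ in e]
    H = sp.csr_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n, len(edges)))
    wp = np.array([w[k] / len(e) for k, e in enumerate(edges)])  # w_e / |e|
    d = H @ wp                                  # degrees d_CE(i)
    A = H @ sp.diags(wp) @ H.T                  # H W' H^T (nearly dense)
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    return sp.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

L = clique_expansion_laplacian([[0, 1, 2], [2, 3]], [1.0, 1.0], n=4)
print(L.toarray().round(3))
```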
The advantage of this formulation is that the solution can be found via a linear system. However, as Table 2 indicates, the obvious downside is that $L_{CE}$ is a potentially very dense matrix and thus one needs in the worst case $|V|^2$ memory and $O(|V|^3)$ computations. This is in contrast to our method, which needs $2 \sum_{e \in E} |e| + |V|$ memory. For the largest example (covertype 6,7), where the clique expansion fails due to memory problems, our method takes 30-100s (depending on $\lambda$). We stop our method for all experiments when we achieve a relative duality gap of $10^{-6}$. In the experiments we do 10 trials for different numbers of labeled points. The regularization parameter $\lambda$ is chosen for both methods from the set $10^{-k}$, $k \in \{0, 1, 2, 3, 4, 5, 6\}$, via 5-fold cross validation. The resulting errors and standard deviations can be found in the following table (the first row of each block lists the number of labeled points).
Our SSL methods based on $\Omega_{H,p}$, $p = 1, 2$, consistently outperform the clique expansion technique of Zhou et al. [11] on all datasets except 20newsgroups³. However, 20newsgroups is a very difficult dataset as only 10,267 out of the 16,242 data points are distinct, which leads to a minimum possible error of 9.6%. A method based on pairwise interactions such as the clique expansion can better deal
² This is a modified version by Sam Roweis of the original 20 newsgroups dataset available at http://www.cs.nyu.edu/~roweis/data/20news_w100.mat.
³ Communications with the authors of [11] could not clarify the difference to their results on 20newsgroups.
with such label noise as the large hyperedges for this dataset accumulate the label noise. On all
other datasets we observe that incorporating hypergraph structure leads to much better results. As
expected our squared TV functional (p = 2) outperforms slightly the total variation (p = 1) even
though the difference is small. Thus, as $\Omega_{H,2}$ reduces to the standard regularization based on the graph Laplacian, which is known to work well, we recommend $\Omega_{H,2}$ for SSL on hypergraphs.
Zoo             20            25            30            35            40            45            50
Zhou et al.     35.1 ± 17.2   30.3 ± 7.9    40.7 ± 14.2   29.7 ± 8.8    32.9 ± 16.8   27.6 ± 10.8   25.3 ± 14.4
Ω_{H,1}         2.9 ± 3.0     1.4 ± 2.2     2.2 ± 2.1     0.7 ± 1.0     0.7 ± 1.5     0.9 ± 1.4     1.9 ± 3.0
Ω_{H,2}         2.3 ± 1.9     1.5 ± 2.4     2.9 ± 2.3     0.9 ± 1.4     0.8 ± 1.7     1.2 ± 1.8     1.6 ± 2.9

Mushrooms       20            40            60            80            100           120           160           200
Zhou et al.     15.5 ± 12.8   10.9 ± 4.4    9.5 ± 2.7     10.3 ± 2.0    9.0 ± 4.5     8.8 ± 1.4     8.8 ± 2.3     9.3 ± 1.0
Ω_{H,1}         19.5 ± 10.5   10.8 ± 3.7    7.4 ± 3.8     5.6 ± 1.9     5.7 ± 2.2     5.4 ± 2.4     4.9 ± 3.8     5.6 ± 3.8
Ω_{H,2}         18.4 ± 7.4    9.8 ± 4.5     9.9 ± 5.5     6.4 ± 2.7     6.3 ± 2.5     4.5 ± 1.8     4.4 ± 2.1     3.0 ± 0.6

covert45        20            40            60            80            100           120           160           200
Zhou et al.     18.9 ± 4.6    18.3 ± 5.2    17.2 ± 6.7    16.6 ± 6.4    17.6 ± 5.2    18.4 ± 5.1    19.2 ± 4.0    20.4 ± 2.9
Ω_{H,1}         21.4 ± 0.9    17.6 ± 2.6    12.6 ± 4.3    7.6 ± 3.5     6.2 ± 3.8     4.5 ± 3.6     2.6 ± 1.6     1.5 ± 1.3
Ω_{H,2}         20.7 ± 2.0    16.1 ± 4.1    10.9 ± 4.9    5.9 ± 3.7     4.6 ± 3.4     3.3 ± 3.1     2.2 ± 1.8     1.0 ± 1.1

covert67        20            40            60            80            100           120           160           200
Ω_{H,1}         40.6 ± 8.9    6.4 ± 10.4    3.6 ± 3.2     3.3 ± 2.5     1.8 ± 0.8     1.3 ± 0.9     0.9 ± 0.4     1.2 ± 0.9
Ω_{H,2}         25.2 ± 18.3   4.3 ± 9.6     2.1 ± 2.0     2.2 ± 1.4     1.4 ± 1.1     1.0 ± 0.8     0.7 ± 0.4     1.1 ± 0.8

20news          20            40            60            80            100           120           160           200
Zhou et al.     45.5 ± 7.5    34.4 ± 3.1    31.5 ± 1.4    29.8 ± 4.0    27.0 ± 1.3    27.3 ± 1.5    25.7 ± 1.4    25.0 ± 1.3
Ω_{H,1}         65.7 ± 6.1    61.4 ± 6.1    53.2 ± 5.7    46.2 ± 3.7    42.4 ± 3.3    40.9 ± 3.2    36.1 ± 1.5    34.7 ± 3.6
Ω_{H,2}         55.0 ± 4.8    48.0 ± 6.0    45.0 ± 5.9    40.3 ± 3.0    38.3 ± 2.7    38.1 ± 2.6    35.0 ± 2.8    34.1 ± 2.0

Test error and standard deviation of the SSL methods over 10 runs for varying numbers of labeled points.
Clustering. We use the normalized hypergraph cut as clustering objective. For more than two
clusters we recursively partition the hypergraph until the desired number of clusters is reached.
For comparison we use the normalized spectral clustering approach based on the Laplacian $L_{CE}$ [11] (clique expansion). The first part (first 6 columns) of the following table shows the clustering
errors (majority vote on each cluster) of both methods as well as the normalized cuts achieved by
these methods on the hypergraph and on the graph resulting from the clique expansion. Moreover,
we show results (last 4 columns) which are obtained based on a kNN graph (unit weights) which
is built based on the Hamming distance (note that we have categorical features) in order to check if
the hypergraph modeling of the problem is actually useful compared to a standard similarity based
graph construction. The number k is chosen as the smallest number for which the graph becomes
connected and we compare results of normalized 1-spectral clustering [14] and the standard spectral
clustering [22]. Note that the employed hypergraph construction has no free parameter.
                   Clustering Error %   Hypergraph Ncut    Graph(CE) Ncut     Clustering Error %   kNN-Graph Ncut
Dataset               Ours      [11]      Ours      [11]     Ours      [11]      [14]      [22]      [14]      [22]
Mushrooms            10.98     32.25    0.0011    0.0013   0.6991    0.7053      48.2      48.2      1e-4      1e-4
Zoo                  16.83     15.84    0.6739    0.6784   5.1315    5.1703      5.94      5.94     1.636     1.636
20-newsgroup         47.77     33.20    0.0176    0.0303   2.3846    1.8492     66.38     66.38    0.1031    0.1034
covertype (4,5)      22.44     22.44    0.0018    0.0022   0.7400    0.6691     22.44     22.44    0.0152   0.02182
covertype (6,7)       8.16         -   8.18e-4         -   0.6882         -     45.85     45.85    0.0041    0.0041
First, we observe that our approach optimizing the normalized hypergraph cut yields better or similar
results in terms of clustering errors compared to the clique expansion (except for 20-newsgroup for
the same reason given in the previous paragraph). The improvement is significant in case of Mushrooms while for Zoo our clustering error is slightly higher. However, we always achieve smaller
normalized hypergraph cuts. Moreover, our method sometimes has even smaller cuts on the graphs
resulting from the clique expansion, although it does not directly optimize this objective in contrast
to [11]. Again, we could not run the method of [11] on covertype (6,7) since the weight matrix is
very dense. Second, the comparison to a standard graph-based approach where the similarity structure is obtained using the Hamming distance on the categorical features shows that using hypergraph
structure is indeed useful. Nevertheless, we think that there is room for improvement regarding the
construction of the hypergraph which is a topic for future research.
Acknowledgments
M.H. would like to acknowledge support by the ERC Starting Grant NOLEPRO and L.J. acknowledges support by the DFG SPP-1324.
References
[1] Y. Huang, Q. Liu, and D. Metaxas. Video object segmentation by hypergraph cut. In CVPR, pages 1738-1745, 2009.
[2] P. Ochs and T. Brox. Higher order motion models and spectral clustering. In CVPR, pages 614-621, 2012.
[3] S. Klamt, U.-U. Haus, and F. Theis. Hypergraphs and cellular networks. PLoS Computational Biology, 5:e1000385, 2009.
[4] Z. Tian, T. Hwang, and R. Kuang. A hypergraph-based learning algorithm for classifying gene expression and arrayCGH data with prior knowledge. Bioinformatics, 25:2831-2838, 2009.
[5] D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: an approach based on dynamical systems. VLDB Journal, 8:222-236, 2000.
[6] J. Bu, S. Tan, C. Chen, C. Wang, H. Wu, L. Zhang, and X. He. Music recommendation by unified hypergraph: Combining social media information and music content. In Proc. of the Int. Conf. on Multimedia (MM), pages 391-400, 2010.
[7] A. Shashua, R. Zass, and T. Hazan. Multi-way clustering using super-symmetric non-negative tensor factorization. In ECCV, pages 595-608, 2006.
[8] S. Rota Bulò and M. Pelillo. A game-theoretic approach to hypergraph clustering. In NIPS, pages 1571-1579, 2009.
[9] M. Leordeanu and C. Sminchisescu. Efficient hypergraph clustering. In AISTATS, pages 676-684, 2012.
[10] S. Agarwal, J. Lim, L. Zelnik-Manor, P. Perona, D. J. Kriegman, and S. Belongie. Beyond pairwise clustering. In CVPR, pages 838-845, 2005.
[11] D. Zhou, J. Huang, and B. Schölkopf. Learning with hypergraphs: Clustering, classification, and embedding. In NIPS, pages 1601-1608, 2006.
[12] S. Agarwal, K. Branson, and S. Belongie. Higher order learning with graphs. In ICML, pages 17-24, 2006.
[13] E. Ihler, D. Wagner, and F. Wagner. Modeling hypergraphs by graphs with the same mincut properties. Information Processing Letters, 45:171-175, 1993.
[14] M. Hein and T. Bühler. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In NIPS, pages 847-855, 2010.
[15] A. Szlam and X. Bresson. Total variation and Cheeger cuts. In ICML, pages 1039-1046, 2010.
[16] M. Hein and S. Setzer. Beyond spectral clustering - tight relaxations of balanced graph cuts. In NIPS, pages 2366-2374, 2011.
[17] T. Bühler, S. Rangapuram, S. Setzer, and M. Hein. Constrained fractional set programs and their application in local clustering and community detection. In ICML, pages 624-632, 2013.
[18] F. Bach. Learning with submodular functions: A convex optimization perspective. CoRR, abs/1111.6453, 2011.
[19] M. Belkin and P. Niyogi. Semi-supervised learning on manifolds. Machine Learning, 56:209-239, 2004.
[20] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, volume 16, pages 321-328, 2004.
[21] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Patt. Anal. Mach. Intell., 22(8):888-905, 2000.
[22] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395-416, 2007.
[23] E. Esser, X. Zhang, and T. F. Chan. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM Journal on Imaging Sciences, 3(4):1015-1046, 2010.
[24] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. of Math. Imaging and Vision, 40:120-145, 2011.
[25] L. Condat. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optimization Theory and Applications, 158(2):460-479, 2013.
[26] K. Kiwiel. On linear-time algorithms for the continuous quadratic knapsack problem. J. Opt. Theory Appl., 134(3):549-554, 2007.
models
Jason Lee?
Stanford University
Stanford, USA
[email protected]
Ran Gilad-Bachrach
Microsoft Research
Redmond, USA
[email protected]
Rich Caruana
Microsoft Research
Redmond, USA
[email protected]
Abstract
In the mixture models problem it is assumed that there are K distributions
$\nu_1, \dots, \nu_K$ and one gets to observe a sample from a mixture of these distributions with unknown coefficients. The goal is to associate instances with
their generating distributions, or to identify the parameters of the hidden
distributions. In this work we make the assumption that we have access to
several samples drawn from the same K underlying distributions, but with
different mixing weights. As with topic modeling, having multiple samples
is often a reasonable assumption. Instead of pooling the data into one sample, we prove that it is possible to use the differences between the samples
to better recover the underlying structure. We present algorithms that recover the underlying structure under milder assumptions than the current
state of art when either the dimensionality or the separation is high. The
methods, when applied to topic modeling, allow generalization to words not
present in the training data.
1 Introduction
The mixture model has been studied extensively from several directions. In one setting it
is assumed that there is a single sample, that is a single collection of instances, from which
one has to recover the hidden information. A line of studies on clustering theory, starting
from [5] has proposed to address this problem by finding a projection to a low dimensional
space and solving the problem in this space. The goal of this projection is to reduce the
dimension while preserving the distances, as much as possible, between the means of the
underlying distributions. We will refer to this line as MM (Mixture Models). On the other
end of the spectrum, Topic modeling (TM), [9, 3], assumes multiple samples (documents)
that are mixtures, with different weights of the underlying distributions (topics) over words.
Comparing the two lines presented above shows some similarities and some differences. Both
models assume the same generative structure: a point (word) is generated by first choosing
the distribution ?i using the mixing weights and then selecting a point (word) according to
this distribution. The goal of both models is to recover information about the generative
model (see [10] for more on that). However, there are some key differences:
(a) In MM, there exists a single sample to learn from. In TM, each document is a mixture
of the topics, but with different mixture weights.
(b) In MM, the points are represented as feature vectors while in TM the data is represented
as a word-document co-occurrence matrix. As a consequence, the model generated by
TM cannot assign words that did not previously appear in any document to topics.
* Work done while the author was an intern at Microsoft Research.
(c) TM assumes high density of the samples, i.e., that each word appears multiple times.
However, if the topics were not discrete distributions, as is mostly the case in MM, each
"word" (i.e., value) would typically appear either zero or one time, which makes the
co-occurrence matrix useless.
In this work we try to close the gap between MM and TM. Similar to TM, we assume that
multiple samples are available. However, we assume that points (words) are presented as
feature vectors and the hidden distributions may be continuous. This allows us to solve
problems that are typically hard in the MM model with greater ease and generate models
that generalize to points not in the training data which is something that TM cannot do.
1.1 Definitions and Notations
We assume a mixture model in which there are $K$ mixture components $\nu_1, \dots, \nu_K$ defined over the space $X$. These mixture components are probability measures over $X$. We assume that there are $M$ mixture models (samples), each drawn with different mixture weights $\alpha^1, \dots, \alpha^M$ such that $\alpha^j = (\alpha^j_1, \dots, \alpha^j_K)$, where all the weights are non-negative and sum to 1. Therefore, we have $M$ different probability measures $D_1, \dots, D_M$ defined over $X$ such that for a measurable set $A$ and $j = 1, \dots, M$ we have $D_j(A) = \sum_i \alpha^j_i \nu_i(A)$. We will denote by $\alpha_{\min}$ the minimal value of $\alpha^j_i$.
In the first part of this work, we will provide an algorithm that, given samples $S_1, \dots, S_M$ from the mixtures $D_1, \dots, D_M$, finds a low-dimensional embedding that preserves the distances between the means of each mixture.
In the second part of this work we will assume that the mixture components have disjoint
supports. Hence we will assume that $X = \cup_j C_j$ such that the $C_j$'s are disjoint and for every $j$, $\nu_j(C_j) = 1$. Given samples $S_1, \dots, S_M$, we will provide an algorithm that finds the
supports of the underlying distributions, and thus clusters the samples.
1.2 Examples
Before we dive further in the discussion of our methods and how they compare to prior art,
we would like to point out that the model we assume is realistic in many cases. Consider the
following example: assume that one would like to cluster medical records to identify subtypes of diseases (e.g., different types of heart disease). In the classical clustering setting
(MM), one would take a sample of patients and try to divide them based on some similarity
criteria into groups. However, in many cases, one has access to data from different hospitals
in different geographical locations. The communities being served by the different hospitals
may be different in socioeconomic status, demographics, genetic backgrounds, and exposure
to climate and environmental hazards. Therefore, different disease sub-types are likely to
appear in different ratios in the different hospital. However, if patients in two hospitals
acquired the same sub-type of a disease, parts of their medical records will be similar.
Another example is object classification in images. Given an image, one may break it to
patches, say of size 10x10 pixels. These patches may have different distributions based on the
object in that part of the image. Therefore, patches from images taken at different locations
will have different representation of the underlying distributions. Moreover, patches from
the center of the frame are more likely to contain parts of faces than patches from the
perimeter of the picture. At the same time, patches from the bottom of the picture are
more likely to be of grass than patches from the top of the picture.
In the first part of this work we discuss the problem of identifying the mixture component
from multiple samples when the means of the different components differ and variances are
bounded. We focus on the problem of finding a low dimensional embedding of the data that
preserves the distances between the means since the problem of finding the mixtures in a
low dimensional space has already been address (see, for example [10]). Next, we address a
different case in which we assume that the support of the hidden distributions is disjoint.
We show that in this case we can identify the supports of each distribution. Finally we
demonstrate our approaches on toy problems. The proofs of the theorems and lemmas
appear in the appendix. Table 1 summarizes the applicability of the algorithms presented
here to the different scenarios.
1.3 Comparison to prior art
There are two common approaches in the theoretical study of the MM model. The method of moments [6, 8, 1] allows the recovery of the model but requires exponential running time and sample sizes. The other approach, to which we compare our results, uses a two stage approach. In the first stage, the data is projected to a low dimensional space and in the second stage the association of points to clusters is recovered. Most of the results with this approach assume that the mixture components are Gaussians. Dasgupta [5], in a seminal paper, presented the first result in this line.

                  Disjoint clusters    Overlapping clusters
High dimension    DSC, MSP             MSP
Low dimension     DSC                  -

Table 1: Summary of the scenarios the MSP (Multi Sample Projection) algorithm and the DSC (Double Sample Clustering) algorithm are designed to address.
He used random projections to project the points to a space of a lower dimension. This work assumes that the separation is at least $\Omega(\sigma_{\max}\sqrt{n})$. This result has been improved in a series of papers. Arora and Kannan [10] presented algorithms for finding the mixture components which are, in most cases, polynomial in $n$ and $K$. Vempala and Wang [11] used PCA to reduce the required separation to $\Omega\big(\sigma_{\max} K^{1/4} \log^{1/4}(n/\alpha_{\min})\big)$. They use PCA to project on the first $K$ principal components; however, they require the Gaussians to be spherical. Kannan, Salmasian and Vempala [7] used similar spectral methods but were able to improve the results to require separation of only $c\,\sigma_{\max} K^{2/3}/\alpha_{\min}^2$. Chaudhuri [4] have suggested using correlations and independence between features under the assumption that the means of the Gaussians differ on many features. They require separation of $\Omega\big(\sigma_{\max}\sqrt{K \log(K\sigma_{\max}\log n/\alpha_{\min})}\big)$; however, they assume that the Gaussians are axis-aligned and that the distance between the centers of the Gaussians is spread across $\Omega(K\sigma_{\max}\log n/\alpha_{\min})$ coordinates.
We present a method to project the problem into a space of dimension $\tilde{d}$, which is the dimension of the affine space spanned by the means of the distributions. We can find this projection and maintain the distances between the means to within a factor of $1 \pm \epsilon$. The different factors, $\sigma_{\max}$, $n$ and $\epsilon$, will affect the sample size needed, but do not make the problem impossible. This can be used as a preprocessing step for any of the results discussed above. For example, combining with [5] yields an algorithm that requires a separation of only $\Omega(\sigma_{\max}\sqrt{\tilde{d}}) \leq \Omega(\sigma_{\max}\sqrt{K})$. However, using [11] will result in a separation requirement of $\Omega\big(\sigma_{\max}\sqrt{K\log(K\sigma_{\max}\log\tilde{d}/\alpha_{\min})}\big)$. There is also an improvement in terms of the value of $\sigma_{\max}$ since we need only to control the variance in the affine space spanned by the means of the Gaussians and do not need to restrict the variance in orthogonal directions, as long as it is finite. Later we also show that we can work in a more generic setting where the distributions are not restricted to be Gaussians as long as the supports of the distributions are disjoint. While the disjoint assumption may seem too strict, we note that the results presented above make very similar assumptions. For example, even if the required separation is $\sigma_{\max} K^{1/2}$, then if we look at the Voronoi tessellation around the centers of the Gaussians, each cell will contain at least $1 - (2\pi)^{-1} K^{3/4} \exp(-K/2)$ of the mass of the Gaussian. Therefore, when $K$ is large, the supports of the Gaussians are almost disjoint.
2 Projection for overlapping components
In this section we present a method to use multiple samples to project high dimensional mixtures to a low dimensional space while keeping the means of the mixture components well separated. The main idea behind the Multi Sample Projection (MSP) algorithm is simple. Let $\mu_i$ be the mean of the $i$'th component $\nu_i$ and let $E_j$ be the mean of the $j$'th mixture $D_j$. From the nature of the mixture, $E_j$ is in the convex hull of $\mu_1, \dots, \mu_K$ and hence in the affine space spanned by them; this is demonstrated in Figure 1. Under mild assumptions, if we have sufficiently many mixtures, their means will span the affine space spanned by $\mu_1, \dots, \mu_K$. Therefore, the MSP algorithm estimates the $E_j$'s and projects to the affine space they span. The reason for selecting this sub-space is that by projecting on this space we maintain the distance between the means while reducing the dimension to at most $K - 1$. The MSP algorithm is presented in Algorithm 1.

Algorithm 1 Multi Sample Projection (MSP)
Inputs: Samples $S_1, \dots, S_m$ from mixtures $D_1, \dots, D_m$
Outputs: Vectors $\hat{v}_1, \dots, \hat{v}_{m-1}$ which span the projected space
Algorithm:
1. For $j = 1, \dots, m$ let $\hat{E}_j$ be the mean of the sample $S_j$
2. For $j = 1, \dots, m - 1$ let $\hat{v}_j = \hat{E}_j - \hat{E}_{j+1}$
3. return $\hat{v}_1, \dots, \hat{v}_{m-1}$

In the following theorem we prove the main properties of the MSP algorithm. We will assume that $X = \mathbb{R}^n$, the first two moments of the $\nu_j$'s are finite, and $\sigma^2_{\max}$ denotes the maximal variance of any of the components in any direction. The separation of the mixture components is $\min_{j \neq j'} \|\mu_j - \mu_{j'}\|$. Finally, we will denote by $\tilde{d}$ the dimension of the affine space spanned by the $\mu_j$'s. Hence, $\tilde{d} \leq K - 1$.
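A direct implementation sketch of Algorithm 1 follows; the orthonormalization step and the toy mixture generator are our additions for illustration.

```python
import numpy as np

def msp(samples):
    E_hat = np.array([S.mean(axis=0) for S in samples])  # step 1
    V = E_hat[:-1] - E_hat[1:]                           # step 2: v_j
    Q, _ = np.linalg.qr(V.T)                             # orthonormal basis
    return Q                                             # columns span the space

def project(X, Q):
    return X @ Q                                         # low-dim coordinates

rng = np.random.default_rng(0)
mus = np.array([[0.0] * 50, [3.0] + [0.0] * 49])         # two component means

def sample(alpha, N):                                    # draw from a mixture
    z = rng.choice(2, size=N, p=alpha)
    return mus[z] + rng.standard_normal((N, 50))

S1, S2 = sample([0.8, 0.2], 500), sample([0.3, 0.7], 500)
Q = msp([S1, S2])
print(project(S1, Q)[:3])  # points now live in a 1-dimensional subspace
```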
Theorem 1 (MSP Analysis). Let $E_j = \mathbb{E}[D_j]$ and let $v_j = E_j - E_{j+1}$. Let $N_j = |S_j|$. The following holds for MSP:
1. The computational complexity of the MSP algorithm is $n \sum_{j=1}^{M} N_j + 2n(m-1)$, where $n$ is the original dimension of the problem.
2. For any $\epsilon > 0$,
$$\Pr\Big[\sup_j \|E_j - \hat{E}_j\| > \epsilon\Big] \leq \frac{n \sigma_{\max}^2}{\epsilon^2} \sum_j \frac{1}{N_j}.$$
3. Let $\tilde{\mu}_i$ be the projection of $\mu_i$ on the space spanned by $\hat{v}_1, \dots, \hat{v}_{M-1}$ and assume that $\forall i$, $\mu_i \in \mathrm{span}\{v_j\}$. Let $a_{ji}$ be such that $\mu_i = \sum_j a_{ji} v_j$ and let $A = \max_i \sum_j |a_{ji}|$. Then
$$\Pr\Big[\max_{i,i'} \big| \|\tilde{\mu}_i - \tilde{\mu}_{i'}\| - \|\mu_i - \mu_{i'}\| \big| > \epsilon\Big] \leq \frac{4 n \sigma_{\max}^2 A^2}{\epsilon^2} \sum_j \frac{1}{N_j}.$$
The MSP analysis theorem shows that with large enough samples, the projection will maintain the separation between the centers of the distributions. Moreover, since this is a projection, the variance in any direction cannot increase. The value of $A$ measures the complexity of the setting. If the mixing coefficients are very different in the different samples then $A$ will be small. However, if the mixing coefficients are very similar, a larger sample is required. Nevertheless, the size of the sample needed is polynomial in the parameters of the problem. It is also apparent that with large enough samples, a good projection will be found, even with large variances, high dimensions and close centroids.
Figure 1: The mean of a mixture of the components lies in the convex hull of the component means, demonstrated here by the red line.
A nice property of the bounds presented here is that they assume only bounded first and
second moments. Once a projection to a low dimensional space has been found, it is possible
to find the clusters using approaches presented in section 1.3. However, the analysis of the
MSP algorithm assumes that the means $E_1, \dots, E_M$ span the affine space spanned by $\mu_1, \dots, \mu_K$. Clearly, this implies that we require that $m > \tilde{d}$. However, when $m$ is much larger than $\tilde{d}$, we might end up with a projection on too large a space. This could easily be fixed since in this case, $\hat{E}_1, \dots, \hat{E}_m$ will be almost co-planar in the sense that there will be an affine space of dimension $\tilde{d}$ that is very close to all these points, and we can project onto this space.
3 Disjoint supports and the Double Sample Clustering (DSC) algorithm
In this section we discuss the case where the underlying distributions have disjoint supports.
In this case, we do not make any assumption about the distributions. For example, we do
not require finite moments. However, as in the mixture of Gaussians case some sort of
separation between the distributions is needed, this is the role of the disjoint supports.
We will show that given two samples from mixtures with different mixture coefficients, it
is possible to find the supports of the underlying distributions (clusters) by building a tree
of classifiers such that each leaf represents a cluster. The tree is constructed in a greedy
fashion. First we take the two samples, from the two distributions, and reweigh the examples
such that the two samples will have the same cumulative weight. Next, we train a classifier
to separate between the two samples. This classifier becomes the root of the tree. It also
splits each of the samples into two sets. We take all the examples that the classifier assigns to the label $+1$ ($-1$), reweigh them and train another classifier to separate between the two
samples. We keep going in the same fashion until we can no longer find a classifier that
splits the data significantly better than random.
To understand why this algorithm works it is easier to look first at the case where the
mixture distributions are known. If $D_1$ and $D_2$ are known, we can define the $L_1$ distance between them as $L_1(D_1, D_2) = \sup_A |D_1(A) - D_2(A)|$.¹ It turns out that the supremum is attained by a set $A$ such that for any $i$, $\nu_i(A)$ is either zero or one. Therefore, any inner node in the tree splits the region without breaking clusters. This process proceeds until all the points associated with a leaf are from the same cluster, in which case no classifier can distinguish between the classes.
When working with samples, we have to tolerate some error and prevent overfitting. One way
to see that is to look at the problem of approximating the $L_1$ distance between $D_1$ and $D_2$ using samples $S_1$ and $S_2$. One possible way to do that is to define $\hat{L}_1 = \sup_A \big| \frac{|A \cap S_1|}{|S_1|} - \frac{|A \cap S_2|}{|S_2|} \big|$.
However, this estimate is almost surely going to be 1 if the underlying distributions are
absolutely continuous. Therefore, one has to restrict the class from which A can be selected
to a class of VC dimension small enough compared to the sizes of the samples. We claim
that asymptotically, as the sizes of the samples increase, one can increase the complexity of
the class until the clusters can be separated.
Before we proceed, we recall a result of [2] that shows the relation between classification
and the L1 distance. We will abuse the notation and treat A both as a subset and as a
classifier. If we mix $D_1$ and $D_2$ with equal weights then
$$\mathrm{err}(A) = D_1(X \setminus A) + D_2(A) = 1 - D_1(A) + D_2(A) = 1 - \big(D_1(A) - D_2(A)\big).$$
Therefore, minimizing the error is equivalent to maximizing the $L_1$ distance.
¹ The supremum is over all the measurable sets.
Algorithm 2 Double Sample Clustering (DSC)
Inputs:
- Samples $S_1, S_2$
- A binary learning algorithm $\mathcal{L}$ that given samples $S_1, S_2$ with weights $w_1, w_2$ finds a classifier $h$ and an estimator $e$ of the error of $h$.
- A threshold $\theta > 0$.
Outputs:
- A tree of classifiers
Algorithm:
1. Let $w_1 = 1$ and $w_2 = |S_1| / |S_2|$
2. Apply $\mathcal{L}$ to $S_1$ and $S_2$ with weights $w_1$ and $w_2$ to get the classifier $h$ and estimator $e$.
3. If $e \geq \frac{1}{2} - \theta$:
   (a) return a tree with a single leaf.
4. else
   (a) For $j = 1, 2$, let $S_j^+ = \{x \in S_j \text{ s.t. } h(x) > 0\}$
   (b) For $j = 1, 2$, let $S_j^- = \{x \in S_j \text{ s.t. } h(x) < 0\}$
   (c) Let $T^+$ be the tree returned by the DSC algorithm applied to $S_1^+$ and $S_2^+$
   (d) Let $T^-$ be the tree returned by the DSC algorithm applied to $S_1^-$ and $S_2^-$
   (e) return a tree in which $h$ is at the root node and $T^-$ is its left subtree and $T^+$ is its right subtree
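A toy sketch of the DSC recursion follows (ours). The learner $\mathcal{L}$ is a depth-limited decision tree from scikit-learn and, for brevity, the error estimate $e$ is the weighted training error rather than a held-out estimate, so the stopping rule is only heuristic here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dsc(S1, S2, theta=0.1, min_size=10):
    if min(len(S1), len(S2)) < min_size:
        return {"leaf": True}
    X = np.vstack([S1, S2])
    y = np.r_[np.ones(len(S1)), -np.ones(len(S2))]
    wts = np.r_[np.ones(len(S1)), np.full(len(S2), len(S1) / len(S2))]
    h = DecisionTreeClassifier(max_depth=2).fit(X, y, sample_weight=wts)
    e = np.average(h.predict(X) != y, weights=wts)  # weighted training error
    if e >= 0.5 - theta:
        return {"leaf": True}
    p1, p2 = h.predict(S1) > 0, h.predict(S2) > 0
    return {"leaf": False, "h": h,
            "plus": dsc(S1[p1], S2[p2], theta, min_size),
            "minus": dsc(S1[~p1], S2[~p2], theta, min_size)}

rng = np.random.default_rng(1)

def mix(alpha, N):  # two well-separated clusters on the line
    z = rng.choice(2, size=N, p=alpha)
    return (6 * z + rng.standard_normal(N)).reshape(-1, 1)

tree = dsc(mix([0.7, 0.3], 400), mix([0.3, 0.7], 400))
print(tree["leaf"], tree.get("h"))  # root splits, children become leaves
```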
The key observation for the DSC algorithm is that if $\alpha^1_i \neq \alpha^2_i$, then a set $A$ that maximizes the $L_1$ distance between $D_1$ and $D_2$ is aligned with cluster boundaries (up to a measure zero). Furthermore, $A$ contains all the clusters for which $\alpha^1_i > \alpha^2_i$ and does not contain all the clusters for which $\alpha^1_i < \alpha^2_i$. Hence, if we split the space to $A$ and $\bar{A}$ we have few clusters in each side. By applying the same trick recursively in each side we keep on bisecting the space according to cluster boundaries until subspaces that contain only a single cluster remain. These sub-spaces cannot be further separated and hence the algorithm will stop. Figure 2 demonstrates this idea. The following lemma states this argument mathematically:
Lemma 1. If $D_j = \sum_i \alpha^j_i \nu_i$ then
1. $L_1(D_1, D_2) \leq \sum_i \max\big(\alpha^1_i - \alpha^2_i, 0\big)$.
2. If $A^* = \cup_{i : \alpha^1_i > \alpha^2_i} C_i$ then $D_1(A^*) - D_2(A^*) = \sum_i \max\big(\alpha^1_i - \alpha^2_i, 0\big)$.
3. If $\forall i$, $\alpha^1_i \neq \alpha^2_i$ and $A$ is such that $D_1(A) - D_2(A) = L_1(D_1, D_2)$ then $\forall i$, $\nu_i(A \triangle A^*) = 0$.
Figure 2: Demonstration of the DSC algorithm. Assume that $\alpha^1 = (0.4, 0.3, 0.3)$ for the orange, green and blue regions respectively and $\alpha^2 = (0.5, 0.1, 0.4)$. The green region maximizes the $L_1$ distance and therefore will be separated from the blue and orange. Conditioned on these two regions, the mixture coefficients are $\alpha^1_{\text{orange,blue}} = (4/7, 3/7)$ and $\alpha^2_{\text{orange,blue}} = (5/9, 4/9)$. The region that maximizes this conditional $L_1$ is the orange region, which will be separated from the blue.
We conclude from Lemma 1 that if $D_1$ and $D_2$ were explicitly known and one could have found a classifier that best separates between the distributions, that classifier would not break clusters as long as the mixing coefficients are not identical. In order for this to hold when the separation is applied recursively in the DSC algorithm it suffices to have that for every $I \subseteq [1, \dots, K]$, if $|I| > 1$ and $i \in I$ then
$$\frac{\alpha^1_i}{\sum_{i' \in I} \alpha^1_{i'}} \neq \frac{\alpha^2_i}{\sum_{i' \in I} \alpha^2_{i'}}$$
to guarantee that at any stage of the algorithm clusters will not be split by the classifier (but may be sections of measure zero). This is also sufficient to guarantee that the leaves will contain single clusters.
In the case where data is provided through a finite sample then some book-keeping is
needed. However, the analysis follows the same path. We show that with samples large
enough, clusters are only minimally broken. For this to hold we require that the learning
algorithm L separates the clusters according to this definition:
Definition 1. For $I \subseteq [1, \dots, K]$ let $c_I : X \mapsto \{\pm 1\}$ be such that $c_I(x) = 1$ if $x \in \cup_{i \in I} C_i$ and $c_I(x) = -1$ otherwise. A learning algorithm $\mathcal{L}$ separates $C_1, \dots, C_K$ if for every $\epsilon, \delta > 0$ there exists $N$ such that for every $n > N$ and every measure $\eta$ over $X \times \{\pm 1\}$, with probability $1 - \delta$ over samples from $\eta^n$:
1. The algorithm $\mathcal{L}$ returns a hypothesis $h : X \mapsto \{\pm 1\}$ and an error estimator $e \in [0, 1]$ such that $\big| \Pr_{x,y \sim \eta}[h(x) \neq y] - e \big| \leq \epsilon$.
2. $h$ is such that $\forall I$, $\Pr_{x,y \sim \eta}[h(x) \neq y] < \Pr_{x,y \sim \eta}[c_I(x) \neq y] + \epsilon$.
Before we introduce the main statement, we define what it means for a tree to cluster the
mixture components:
Definition 2. A clustering tree is a tree in which each internal node is a classifier and
the points that end in a certain leaf are considered a cluster. A clustering tree ε-clusters the
mixture components μ1, . . . , μK if for every i ∈ {1, . . . , K} there exists a leaf in the tree such
that the cluster L ⊆ X associated with this leaf is such that μi(L) ≥ 1 − ε and μi′(L) < ε
for every i′ ≠ i.
To be able to find a clustering tree, the two mixtures have to be different. The following
definition captures the gap which is the amount of difference between the mixtures.
Definition 3. Let α^1 and α^2 be two mixture vectors. The gap, g, between them is

g = min { | α^1_i / Σ_{i′∈I} α^1_{i′} − α^2_i / Σ_{i′∈I} α^2_{i′} | : I ⊆ [1, . . . , K], |I| > 1 and i ∈ I }.
We say that α is bounded away from zero by b if b ≤ min_i α_i.
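Definition 3 can be evaluated by brute force for small K; a sketch (exponential in K, for illustration only):

```python
from itertools import combinations

def gap(a1, a2):
    """Gap g of Definition 3, by enumerating all subsets I with |I| > 1."""
    K = len(a1)
    g = float("inf")
    for size in range(2, K + 1):
        for I in combinations(range(K), size):
            s1 = sum(a1[i] for i in I)
            s2 = sum(a2[i] for i in I)
            for i in I:
                g = min(g, abs(a1[i] / s1 - a2[i] / s2))
    return g

print(gap((0.4, 0.3, 0.3), (0.5, 0.1, 0.4)))
```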
Theorem 2. Assume that L separates μ1, . . . , μK, that there is a gap g > 0 between α^1 and α^2,
and that both α^1 and α^2 are bounded away from zero by b > 0. For every ε∗, δ∗ > 0 there exists
N = N(ε∗, δ∗, g, b, K) such that, given two random samples of sizes n1, n2 > N from the two
mixtures, with probability of at least 1 − δ∗ the DSC algorithm will return a clustering tree
which ε∗-clusters μ1, . . . , μK when applied with the threshold θ = g/8.
4 Empirical evidence
We conducted several experiments with synthetic data to compare different methods when
clustering in high dimensional spaces. The synthetic data was generated from three Gaussians with centers at points (0, 0) , (3, 0) and (?3, +3). On top of that, we added additional
dimensions with normally distributed noise. In the first experiment we used unit variance
for all dimensions. In the second experiment we skewed the distribution so that the variance
in the other features is 5.
Two sets of mixing coefficients for the three Gaussians were chosen at random 100 times by
selecting three uniform values from [0, 1] and normalizing them to sum to 1. We generated
[Figure 3: two line plots comparing Random Projection, K-Means, Maximal Variance, MSP and DSC; x-axis: problem dimension (0–12000), y-axis: accuracy. Panel (a): accuracy with spherical Gaussians; panel (b): average accuracy with skewed Gaussians.]
Figure 3: Comparison of the different algorithms. The dimension of the problem is
presented on the X axis and the accuracy on the Y axis.
two samples with 80 examples each from the two mixing coefficients. The DSC and MSP
algorithms received these two samples as inputs, while the reference algorithms, which are
not designed to use multiple samples, received the combined set of 160 points as input.
We ran 100 trials. In each trial, each of the algorithms finds 3 Gaussians. We then measure
the percentage of the points associated with the true originating Gaussian after making the
best assignment of the inferred centers to the true Gaussians.
We compared several algorithms. K-means was used on the data as a baseline. We compared
three low dimensional projection algorithms. Following [5] we used random projections as
the first of these. Second, following [11] we used PCA to project on the maximal variance
subspace. MSP was used as the third projection algorithm. In all projection algorithms we
first projected onto a one-dimensional space and then applied K-means to find the clusters.
Finally, we used the DSC algorithm. The DSC algorithm uses the classregtree function in
MATLAB as its learning oracle. Whenever K-means was applied, the MATLAB implementation of this procedure was used with 10 random initial starts.
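For reference, the random-projection baseline is a few lines in Python (the experiments used MATLAB; the function name and library choice here are our own):

```python
import numpy as np
from sklearn.cluster import KMeans

def random_projection_kmeans(X, k, seed=0):
    """Project the pooled sample onto one random direction, then run k-means."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=X.shape[1])
    direction /= np.linalg.norm(direction)
    proj = X @ direction                  # one-dimensional projection
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    return km.fit_predict(proj.reshape(-1, 1))
```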
Figure 3(a) shows the results of the first experiment with unit variance in the noise dimensions. In this setting, the Maximal Variance method is expected to work well since the first
two dimensions have larger expected variance. Indeed we see that this is the case. However,
when the number of dimensions is large, MSP and DSC outperform the other methods;
this corresponds to the difficult regime of low signal to noise ratio. In 12800 dimensions,
MSP outperforms Random Projections 90% of the time, Maximal Variance 80% of the time,
and K-means 79% of the time. DSC outperforms Random Projections, Maximal Variance
and K-means 84%, 69%, and 66% of the time respectively. Thus the p-value in all these
experiments is < 0.01.
Figure 3(b) shows the results of the experiment in which the variance in the noise dimensions
is higher, which creates a more challenging problem. In this case, we see that all the reference
methods suffer significantly, but the MSP and the DSC methods obtain similar results as in
the previous setting. Both the MSP and the DSC algorithms win over Random Projections,
Maximal Variance and K-means more than 78% of the time when the dimension is 400 and
up. The p-value of these experiments is < 1.6 ? 10?7 .
5 Conclusions
The mixture problem examined here is closely related to the problem of clustering. Most
clustering data can be viewed as points generated from multiple underlying distributions or
generating functions, and clustering can be seen as the process of recovering the structure
of or assignments to these distributions. We presented two algorithms for the mixture
problem that can be viewed as clustering algorithms. The MSP algorithm uses multiple
samples to find a low dimensional space to project the data to. The DSC algorithm builds
a clustering tree assuming that the clusters are disjoint. We proved that these algorithms
work under milder assumptions than currently known methods. The key message in this
work is that when multiple samples are available, often it is best not to pool the data into
one large sample, but that the structure in the different samples can be leveraged to improve
clustering power.
References
[1] Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pp. 103–112. IEEE, 2010.
[2] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems 19 (2007), 137.
[3] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research 3 (2003), 993–1022.
[4] Kamalika Chaudhuri and Satish Rao. Learning mixtures of product distributions using correlations and independence. In Proc. of COLT, 2008.
[5] Sanjoy Dasgupta. Learning mixtures of Gaussians. In Foundations of Computer Science, 1999, 40th Annual Symposium on, pp. 634–644. IEEE, 1999.
[6] Adam Tauman Kalai, Ankur Moitra, and Gregory Valiant. Efficiently learning mixtures of two Gaussians. In Proceedings of the 42nd ACM Symposium on Theory of Computing, pp. 553–562. ACM, 2010.
[7] Ravindran Kannan, Hadi Salmasian, and Santosh Vempala. The spectral method for general mixture models. In Learning Theory, pp. 444–457. Springer, 2005.
[8] Ankur Moitra and Gregory Valiant. Settling the polynomial learnability of mixtures of Gaussians. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pp. 93–102. IEEE, 2010.
[9] Christos H. Papadimitriou, Hisao Tamaki, Prabhakar Raghavan, and Santosh Vempala. Latent semantic indexing: a probabilistic analysis. In Proceedings of the Seventeenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pp. 159–168. ACM, 1998.
[10] Sanjeev Arora and Ravi Kannan. Learning mixtures of arbitrary Gaussians. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pp. 247–257. ACM, 2001.
[11] Santosh Vempala and Grant Wang. A spectral algorithm for learning mixtures of distributions. In Foundations of Computer Science, 2002, Proceedings of the 43rd Annual IEEE Symposium on, pp. 113–122. IEEE, 2002.
4,327 | 4,916 | Approximate Inference in Continuous
Determinantal Point Processes
Raja Hafiz Affandi¹, Emily B. Fox², and Ben Taskar²
¹ University of Pennsylvania, [email protected]
² University of Washington, {ebfox@stat, taskar@cs}.washington.edu
Abstract
Determinantal point processes (DPPs) are random point processes well-suited for
modeling repulsion. In machine learning, the focus of DPP-based models has been
on diverse subset selection from a discrete and finite base set. This discrete setting
admits an efficient sampling algorithm based on the eigendecomposition of the
defining kernel matrix. Recently, there has been growing interest in using DPPs
defined on continuous spaces. While the discrete-DPP sampler extends formally to
the continuous case, computationally, the steps required are not tractable in general.
In this paper, we present two efficient DPP sampling schemes that apply to a wide
range of kernel functions: one based on low-rank approximations via Nyström and random Fourier
and random Fourier feature techniques and another based on Gibbs sampling. We
demonstrate the utility of continuous DPPs in repulsive mixture modeling and
synthesizing human poses spanning activity spaces.
1 Introduction
Samples from a determinantal point process (DPP) [15] are sets of points that tend to be spread out.
More specifically, given Ω ⊆ R^d and a positive semidefinite kernel function L : Ω × Ω → R, the
probability density of a point configuration A ⊂ Ω under a DPP with kernel L is given by

P_L(A) ∝ det(L_A),   (1)

where L_A is the |A| × |A| matrix with entries L(x, y) for each x, y ∈ A. The tendency for repulsion
is captured by the determinant since it depends on the volume spanned by the selected points in the
linearly dependent are less likely to be selected.
Building on the foundational work in [5] for the case where Ω is discrete and finite, DPPs have been
used in machine learning as a model for subset selection in which diverse sets are preferred [2, 3,
9, 12, 13]. These methods build on the tractability of sampling based on the algorithm of Hough et
al. [10], which relies on the eigendecomposition of the kernel matrix to recursively sample points
based on their projections onto the subspace spanned by the selected eigenvectors.
Repulsive point processes, like hard core processes [7, 16], many based on thinned Poisson processes
and Gibbs/Markov distributions, have a long history in the spatial statistics community, where
considering continuous Ω is key. Many naturally occurring phenomena exhibit diversity: trees tend
to grow in the least occupied space [17], ant hill locations are over-dispersed relative to uniform
placement [4] and the spatial distribution of nerve fibers is indicative of neuropathy, with hard-core
processes providing a critical tool [25]. Repulsive processes on continuous spaces have garnered
interest in machine learning as well, especially relating to generative mixture modeling [18, 29].
The computationally attractive properties of DPPs make them appealing to consider in these applications. On the surface, it seems that the eigendecomposition and projection algorithm of [10] for
discrete DPPs would naturally extend to the continuous case. While this is true in a formal sense as L
becomes an operator instead of a matrix, the key steps such as the eigendecomposition of the kernel
and projection of points on subspaces spanned by eigenfunctions are computationally infeasible
except in a few very limited cases where approximations can be made [14]. The absence of a tractable
DPP sampling algorithm for general kernels in continuous spaces has hindered progress in developing
DPP-based models for repulsion.
In this paper, we propose an efficient algorithm to sample from DPPs in continuous spaces using
low-rank approximations of the kernel function. We investigate two such schemes: Nyström and
random Fourier features. Our approach utilizes a dual representation of the DPP, a technique that has
proven useful in the discrete Ω setting as well [11]. For k-DPPs, which only place positive probability
on sets of cardinality k [13], we also devise a Gibbs sampler that iteratively samples points in the
k-set conditioned on all k − 1 other points. The derivation relies on representing the conditional
DPPs using the Schur complement of the kernel. Our methods allow us to handle a broad range of
typical kernels and continuous subspaces, provided certain simple integrals of the kernel function
can be computed efficiently. Decomposing our kernel into quality and similarity terms as in [13],
this includes, but is not limited to, all cases where the (i) spectral density of the quality and (ii)
characteristic function of the similarity kernel can be computed efficiently. Our methods scale well
with dimension, in particular with complexity growing linearly in d.
In Sec. 2, we review sampling algorithms for discrete DPPs and the challenges associated with
sampling from continuous DPPs. We then propose continuous DPP sampling algorithms based on
low-rank kernel approximations in Sec. 3 and Gibbs sampling in Sec. 4. An empirical analysis of the
two schemes is provided in Sec. 5. Finally, we apply our methods to repulsive mixture modeling and
human pose synthesis in Sec. 6 and 7.
2 Sampling from a DPP
When Ω is discrete with cardinality N, an efficient algorithm for sampling from a DPP is given
in [10]. The algorithm, which is detailed in the supplement, uses an eigendecomposition of the
kernel matrix L = Σ_{n=1}^{N} λn vn vn^T and recursively samples points xi as follows, resulting in a set
A ∼ DPP(L) with A = {xi}:

Phase 1  Select eigenvector vn with probability λn / (λn + 1). Let V be the selected eigenvectors (k = |V|).

Phase 2  For i = 1, . . . , k, sample points xi ∈ Ω sequentially with probability based on the projection
of xi onto the subspace spanned by V. Once xi is sampled, update V by excluding the
subspace spanned by the projection of xi onto V.
When Ω is discrete, both steps are straightforward since the first phase involves eigendecomposing a
kernel matrix and the second phase involves sampling from discrete probability distributions based
on inner products between points and eigenvectors. Extending this algorithm to a continuous space
was considered by [14], but for a very limited set of kernels L and spaces Ω. For general L and Ω,
we face difficulties in both phases. Extending Phase 1 to a continuous space requires knowledge of
the eigendecomposition of the kernel function. When Ω is a compact rectangle in R^d, [14] suggest
approximating the eigendecomposition using an orthonormal Fourier basis.
Even if we are able to obtain the eigendecomposition of the kernel function (either directly or via
approximations as considered in [14] and Sec. 3), we still need to implement Phase 2 of the sampling
algorithm. Whereas the discrete case only requires sampling from a discrete probability function,
here we have to sample from a probability density. When Ω is compact, [14] suggest using a rejection
sampler with a uniform proposal on Ω. The authors note that the acceptance rate of this rejection
sampler decreases with the number of points sampled, making the method inefficient in sampling large
sets from a DPP. In most other cases, implementing Phase 2 even via rejection sampling is infeasible
since the target density is in general non-standard with unknown normalization. Furthermore, a
generic proposal distribution can yield extremely low acceptance rates.
In summary, current algorithms can sample approximately from a continuous DPP only for translation-invariant kernels defined on a compact space. In Sec. 3, we propose a sampling algorithm that allows
us to sample approximately from DPPs for a wide range of kernels L and spaces Ω.

3 Sampling from a low-rank continuous DPP
Again considering Ω discrete with cardinality N, the sampling algorithm of Sec. 2 has complexity
dominated by the eigendecomposition, O(N³). If the kernel matrix L is low-rank, i.e. L = B^T B,
with B a D × N matrix and D ≪ N, [11] showed that the complexity of sampling can be reduced to
O(N D² + D³). The basic idea is to exploit the fact that L and the dual kernel matrix C = B B^T,
which is D × D, share the same nonzero eigenvalues, and for each eigenvector vk of L, B vk is the
corresponding eigenvector of C. See the supplement for algorithmic details.
While the dependence on N in the dual is sharply reduced, in continuous spaces, N is infinite. In
order to extend the algorithm, we must find efficient ways to compute C for Phase 1 and manipulate
eigenfunctions implicitly for the projections in Phase 2. Generically, consider sampling from a DPP
on a continuous space Ω with kernel L(x, y) = Σ_{n=1}^{∞} λn φn(x) φ̄n(y), where λn and φn(x) are
eigenvalues and eigenfunctions, and φ̄n(y) is the complex conjugate of φn(y). Assume that we can
approximate L by a low-dimensional (generally complex-valued) mapping, B(x) : Ω → C^D:

L̃(x, y) = B(x)* B(y),  where B(x) = [B1(x), . . . , BD(x)]^T.   (2)

Here, A* denotes the complex conjugate transpose of A. We consider two efficient low-rank approximation schemes in Sec. 3.1 and 3.2. Using such a low-rank representation, we propose an analog of
the dual sampling algorithm for continuous spaces, described in Algorithm 1. A similar algorithm
provides samples from a k-DPP, which only gives positive probability to sets of a fixed cardinality
k [13]. The only change required is to the for-loop in Phase 1 to select exactly k eigenvectors using
an efficient O(Dk) recursion. See the supplement for details.
Algorithm 1 Dual sampler for a low-rank continuous DPP

Input: L̃(x, y) = B(x)* B(y), a rank-D DPP kernel

PHASE 1
  Compute C = ∫_Ω B(x) B(x)* dx
  Compute the eigendecomposition C = Σ_{k=1}^{D} λk vk vk*
  J ← ∅
  for k = 1, . . . , D do
    J ← J ∪ {k} with probability λk / (λk + 1)
  V ← { vk / √(vk* C vk) : k ∈ J }

PHASE 2
  X ← ∅
  while |V| > 0 do
    Sample x̂ from f(x) = (1/|V|) Σ_{v∈V} |v* B(x)|²
    X ← X ∪ {x̂}
    Let v0 be a vector in V such that v0* B(x̂) ≠ 0
    Update V ← { v − (v* B(x̂) / (v0* B(x̂))) v0 : v ∈ V − {v0} }
    Orthonormalize V w.r.t. ⟨v1, v2⟩ = v1* C v2

Output: X
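To make Algorithm 1 concrete, here is a grid-discretized NumPy sketch: integrals over Ω become weighted sums over grid points xs with spacing dx, so B is a D × G complex matrix whose columns are B(x_g). The grid discretization and function names are our own simplifications; the paper evaluates the integrals and inverts the CDF analytically.

```python
import numpy as np

def dual_sample(B, xs, dx, rng=np.random.default_rng()):
    C = (B * dx) @ B.conj().T                          # C = int B(x) B(x)* dx
    lam, v = np.linalg.eigh(C)
    V = v[:, rng.random(len(lam)) < lam / (lam + 1.0)]  # Phase 1
    # normalize w.r.t. <v1, v2> = v1* C v2
    norms = np.einsum('dk,de,ek->k', V.conj(), C, V).real
    V = V / np.sqrt(norms)
    X = []
    while V.shape[1] > 0:                               # Phase 2
        f = np.sum(np.abs(V.conj().T @ B) ** 2, axis=0) * dx
        f /= f.sum()
        i = rng.choice(len(xs), p=f)
        X.append(xs[i])
        b = B[:, i]
        j = np.argmax(np.abs(V.conj().T @ b))           # pick v0 with v0* b != 0
        v0 = V[:, j]
        coef = (V.conj().T @ b) / (v0.conj() @ b)
        V = np.delete(V - np.outer(v0, coef), j, axis=1)
        # Gram-Schmidt in the C-inner product
        for k in range(V.shape[1]):
            for m in range(k):
                V[:, k] -= (V[:, m].conj() @ C @ V[:, k]) * V[:, m]
            V[:, k] /= np.sqrt((V[:, k].conj() @ C @ V[:, k]).real)
    return X
```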
In this dual view, we still have the same two-phase structure, and must address two key challenges:
Phase 1  Assuming a low-rank kernel function decomposition as in Eq. (2), we need to be able to
compute the dual kernel matrix, given by an integral:

C = ∫_Ω B(x) B(x)* dx.   (3)
Phase 2  In general, sampling directly from the density f(x) is difficult; instead, we can compute the
cumulative distribution function (CDF) and sample x using the inverse CDF method [21]:

F(x̂ = (x̂1, . . . , x̂d)) = ∏_{l=1}^{d} ∫_{−∞}^{x̂l} f(x) 1{xl ∈ Ω} dxl.   (4)
Assuming (i) the kernel function L̃ is finite-rank and (ii) the terms C and f(x) are computable,
Algorithm 1 provides exact samples from a DPP with kernel L̃. In what follows, approximations only
arise from approximating general kernels L with low-rank kernels L̃. If given a finite-rank kernel L̃
to begin with, the sampling procedure is exact.
One could imagine approximating L as in Eq. (2) by simply truncating the eigendecomposition
(either directly or using numerical approximations). However, this simple approximation for known
decompositions does not necessarily yield a tractable sampler, because the products of eigenfunctions
required in Eq. (3) might not be efficiently integrable. For our approximation algorithm to work, not
only do we need methods that approximate the kernel function well, but also ones that enable us to solve
Eq. (3) and (4) directly for many different kernel functions. We consider two such approaches that
enable an efficient sampler for a wide range of kernels: Nystr?om and random Fourier features.
3.1 Sampling from an RFF-approximated DPP

Random Fourier features (RFF) [19] is an approach for approximating shift-invariant kernels,
k(x, y) = k(x − y), using randomly selected frequencies. The frequencies are sampled independently from the Fourier transform of the kernel function, ωj ∼ F(k(x − y)), letting:

k̃(x − y) = (1/D) Σ_{j=1}^{D} exp{i ωj^T (x − y)},   x, y ∈ Ω.   (5)

To apply RFFs, we factor L into a quality function q and similarity kernel k (i.e., q(x) = √(L(x, x))):

L(x, y) = q(x) k(x, y) q(y),   x, y ∈ Ω, where k(x, x) = 1.   (6)
The RFF approximation can be applied to cases where the similarity function has a known characteristic function, e.g., Gaussian, Laplacian and Cauchy. Using Eq. (5), we can approximate the
similarity kernel function to obtain a low-rank kernel and dual matrix:

L̃^{RFF}(x, y) = (1/D) Σ_{j=1}^{D} q(x) exp{i ωj^T (x − y)} q(y),   C^{RFF}_{jk} = (1/D) ∫_Ω q²(x) exp{i (ωj − ωk)^T x} dx.
The CDF of the sampling distribution f(x) in Algorithm 1 is given by:

F_{RFF}(x̂) = (1/|V|) Σ_{v∈V} Σ_{j=1}^{D} Σ_{k=1}^{D} v̄j vk ∏_{l=1}^{d} ∫_{−∞}^{x̂l} q²(x) exp{i (ωj − ωk)^T x} 1{xl ∈ Ω} dxl,   (7)

where vj denotes the jth element of vector v. Note that the equations for C^{RFF} and F_{RFF} can be computed
for many different combinations of Ω and q(x). In fact, this method works for any combination
of (i) a translation-invariant similarity kernel k with known characteristic function and (ii) a quality
function q with known spectral density. The resulting kernel L need not be translation invariant. In
the supplement, we illustrate this method by considering a common and important example where
Ω = R^d, q(x) is Gaussian, and k(x, y) is any kernel with known Fourier transform.
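As an illustration of how C^{RFF} becomes closed-form, the following sketch assumes a Gaussian similarity kernel with bandwidth σ and a quality term whose square integrates like a zero-mean Gaussian with covariance ρ²I; that assumption (ours, for illustration) reduces the integral to a Gaussian characteristic function.

```python
import numpy as np

def rff_dual_matrix(D, d, sigma, rho, rng=np.random.default_rng()):
    # frequencies from the Fourier transform of the Gaussian kernel
    omega = rng.normal(scale=1.0 / sigma, size=(D, d))
    diff = omega[:, None, :] - omega[None, :, :]      # omega_j - omega_k
    # int q^2(x) exp{i (omega_j - omega_k)^T x} dx
    #   = exp(-rho^2 ||omega_j - omega_k||^2 / 2) for q^2 = N(0, rho^2 I)
    C = np.exp(-0.5 * rho**2 * np.sum(diff**2, axis=-1)) / D
    return omega, C
```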
3.2 Sampling from a Nyström-approximated DPP

Another approach to kernel approximation is the Nyström method [27]. In particular, given landmarks z1, . . . , zD
sampled from Ω, we can approximate the kernel function and dual matrix as

L̃^{Nys}(x, y) = Σ_{j=1}^{D} Σ_{k=1}^{D} W^2_{jk} L(x, zj) L(zk, y),   C^{Nys}_{jk} = Σ_{n=1}^{D} Σ_{m=1}^{D} W_{jn} W_{mk} ∫_Ω L(zn, x) L(x, zm) dx,

where W_{jk} = L(zj, zk)^{−1/2}. Denoting wj(v) = Σ_{n=1}^{D} W_{jn} vn, the CDF of f(x) in Alg. 1 is:

F_{Nys}(x̂) = (1/|V|) Σ_{v∈V} Σ_{j=1}^{D} Σ_{k=1}^{D} wj(v) wk(v) ∏_{l=1}^{d} ∫_{−∞}^{x̂l} L(x, zj) L(zk, x) 1{xl ∈ Ω} dxl.   (8)

As with the RFF case, we consider a decomposition L(x, y) = q(x) k(x, y) q(y). Here, there are no
translation-invariance requirements, even for the similarity kernel k. In the supplement, we provide the
important example where Ω = R^d and both q(x) and k(x, y) are Gaussians, and also the case where k(x, y) is
polynomial, which cannot be handled by RFF since it is not translation invariant.
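A hypothetical sketch of the Nyström construction, taking W as the (pseudo-)inverse square root of the landmark kernel matrix; the landmarks are assumed to have been drawn from Ω beforehand.

```python
import numpy as np

def nystrom_features(kernel, Z):
    """Return W = L_Z^{-1/2} and a feature map B with B(x)^T B(y) ~ L(x, y)."""
    L_Z = np.array([[kernel(zi, zj) for zj in Z] for zi in Z])
    lam, U = np.linalg.eigh(L_Z)
    inv_sqrt = np.where(lam > 1e-10, 1.0 / np.sqrt(np.clip(lam, 1e-10, None)), 0.0)
    W = (U * inv_sqrt) @ U.T
    def B(x):
        k_x = np.array([kernel(x, z) for z in Z])
        return W @ k_x
    return W, B
```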
4 Gibbs sampling

For k-DPPs, we can consider a Gibbs sampling scheme. In the supplement, we derive that the full
conditional for the inclusion of point xk, given the inclusion of the k − 1 other points, is a 1-DPP with a
modified kernel, which we know how to sample from. Let the kernel function be represented as before:
L(x, y) = q(x) k(x, y) q(y). Denoting J^{\k} = {xj}_{j≠k} and M^{\k} = L^{−1}_{J^{\k}}, the full conditional can
be simplified using Schur's determinantal equality [22]:

p(xk | {xj}_{j≠k}) ∝ L(xk, xk) − Σ_{i,j≠k} M^{\k}_{ij} L(xi, xk) L(xj, xk).   (9)
[Figure 1: four panels; (a)-(c) total variational distance (y-axis, 0–1) vs. σ² (x-axis, 0–5) for the Nyström and RFF approximations; (d) kernel eigenvalues vs. index for d = 1, 5, 10.]
Figure 1: Estimates of total variational distance for Nyström and RFF approximation methods to a DPP with
Gaussian quality and similarity with covariances diag(ρ², . . . , ρ²) and diag(σ², . . . , σ²), respectively.
(a)-(c) For dimensions d = 1, 5 and 10, each plot considers ρ² = 1 and varies σ². (d) Eigenvalues for the Gaussian
kernels with σ² = ρ² = 1 and varying dimension d.
In general, sampling directly from this full conditional is difficult. However, for a wide range of
kernel functions, including those which can be handled by the Nyström approximation in Sec. 3.2,
the CDF can be computed analytically and xk can be sampled using the inverse CDF method:

F(x̂l | {xj}_{j≠k}) = ∫_{−∞}^{x̂l} [ L(xl, xl) − Σ_{i,j≠k} M^{\k}_{ij} L(xi, xl) L(xj, xl) ] 1{xl ∈ Ω} dxl / ∫_Ω [ L(x, x) − Σ_{i,j≠k} M^{\k}_{ij} L(xi, x) L(xj, x) ] dx.   (10)

In the supplement, we illustrate this method by considering the case where Ω = R^d and q(x) and
k(x, y) are Gaussians. We use this same Schur complement scheme for sampling from the full
conditionals in the mixture model application of Sec. 6. A key advantage of this scheme for several
types of kernels is that the complexity of sampling scales linearly with the number of dimensions d,
making it suitable for handling high-dimensional spaces.
As with any Gibbs sampling scheme, the mixing rate is dependent on the correlations between
variables. In cases where the kernel introduces low repulsion we expect the Gibbs sampler to mix well,
while in a high repulsion setting the sampler can mix slowly due to the strong dependencies between
points and fact that we are only doing one-point-at-a-time moves. We explore the dependence of
convergence on repulsion strength in the supplementary materials. Regardless, this sampler provides
a nice tool in the k-DPP setting. Asymptotically, theory suggests that we get exact (though correlated)
samples from the k-DPP. To extend this approach to standard DPPs, we can first sample k (this
assumes knowledge of the eigenvalues of L) and then apply the above method to get a sample. This is
fairly inefficient if many samples are needed. A more involved but potentially efficient approach is to
consider a birth-death sampling scheme where the size of the set can grow/shrink by 1 at every step.
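A minimal sketch of one Gibbs update on a one-dimensional grid, tabulating the unnormalized conditional of Eq. (9) and inverting the empirical CDF; the paper computes the integrals in Eq. (10) analytically instead, and the helper below (our own, assuming k ≥ 2) is for illustration only.

```python
import numpy as np

def gibbs_step(k, X, L, grid, rng=np.random.default_rng()):
    others = [x for j, x in enumerate(X) if j != k]
    L_J = np.array([[L(a, b) for b in others] for a in others])
    M = np.linalg.inv(L_J)                     # M^{\k} = L_{J\k}^{-1}
    dens = []
    for x in grid:
        lx = np.array([L(xi, x) for xi in others])
        dens.append(L(x, x) - lx @ M @ lx)     # Eq. (9), unnormalized
    dens = np.clip(np.array(dens), 0.0, None)
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    X[k] = grid[np.searchsorted(cdf, rng.random())]
    return X
```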
5 Empirical analysis
To evaluate the performance of the RFF and Nyström approximations, we compute the total variational
distance ‖P_L − P_L̃‖₁ = (1/2) Σ_X |P_L(X) − P_L̃(X)|, where P_L(X) denotes the probability of set X
under a DPP with kernel L, as given by Eq. (1). We restrict our analysis to the case where the quality
function and similarity kernel are Gaussians with isotropic covariances diag(ρ², . . . , ρ²) and
diag(σ², . . . , σ²), respectively, enabling our analysis based on the easily computed eigenvalues [8].
We also focus on sampling from k-DPPs for which the size of the set X is always k. Details are in
the supplement.
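For intuition, the distance can be computed exactly on a tiny discretized ground set by enumerating all size-k subsets; this brute-force helper is our own illustration and is feasible only for very small grids.

```python
import numpy as np
from itertools import combinations

def kdpp_probs(L, k):
    idx = range(L.shape[0])
    dets = {S: np.linalg.det(L[np.ix_(S, S)]) for S in combinations(idx, k)}
    Z = sum(dets.values())
    return {S: d / Z for S, d in dets.items()}

def tv_distance(L1, L2, k):
    p, q = kdpp_probs(L1, k), kdpp_probs(L2, k)
    return 0.5 * sum(abs(p[S] - q[S]) for S in p)
```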
Fig. 1 displays estimates of the total variational distance for the RFF and Nyström approximations
when ρ² = 1, varying σ² (the repulsion strength) and the dimension d. Note that the RFF method
performs slightly worse as σ² increases and is rather invariant to d, while the Nyström method
performs much better for increasing σ² but worse for increasing d.
While this phenomenon seems perplexing at first, a study of the eigenvalues of the Gaussian kernel
across dimensions sheds light on the rationale (see Fig. 1). Note that for fixed σ² and ρ², the decay
of eigenvalues is slower in higher dimensions. It has been previously demonstrated that the Nyström
method performs favorably in kernel learning tasks compared to RFF in cases where there is a
large eigengap in the kernel matrix [28]. The plot of the eigenvalues seems to indicate the same
phenomenon here. Furthermore, this result is consistent with the comparison of RFF to Nyström in
approximating DPPs in the discrete Ω case provided in [3].
This behavior can also be explained by looking at the theory behind these two approximations.
For the RFF, while the kernel approximation is guaranteed to be an unbiased estimate of the true
kernel element-wise, the variance is fairly high [19]. In our case, we note that the RFF estimates of
minors are biased because of non-linearity in matrix entries, overestimating probabilities for point
configurations that are more spread out, which leads to samples that are overly-dispersed. For the
Nyström method, on the other hand, the quality of the approximation depends on how well the
landmarks cover Ω. In our experiments the landmarks are sampled i.i.d. from q(x). When either the
similarity bandwidth σ² is small or the dimension d is high, the effective distance between points
increases, thereby decreasing the accuracy of the approximation. Theoretical bounds for the Nyström
DPP approximation in the case when Ω is finite are provided in [3]. We believe the same result holds
for continuous Ω by extending the eigenvalues and spectral norm of the kernel matrix to operator
eigenvalues and operator norms, respectively.
In summary, for moderate values of σ² it is generally good to use the Nyström approximation for
low-dimensional settings and RFF for high-dimensional settings.
6 Repulsive priors for mixture models
Mixture models are used in a wide range of applications from clustering to density estimation.
A common issue with such models, especially in density estimation tasks, is the introduction of
redundant, overlapping components that increase the complexity and reduce interpretability of the
resulting model. This phenomenon is especially prominent when the number of samples is small. In
a Bayesian setting, a common fix to this problem is to consider a sparse Dirichlet prior on the mixture
weights, which penalizes the addition of non-zero-weight components. However, such approaches
run the risk of inaccuracies in the parameter estimates [18]. Instead, [18] show that sampling the
location parameters using repulsive priors leads to better separated clusters while maintaining the
accuracy of the density estimate. They propose a class of repulsive priors that rely on explicitly
defining a distance metric and the manner in which small distances are penalized. The resulting
posterior computations can be fairly complex.
The theoretical properties of DPPs make them an appealing choice as a repulsive prior. In fact, [29]
considered using DPPs as repulsive priors in latent variable models. However, in the absence of
a feasible continuous DPP sampling algorithm, their method was restricted to performing MAP
inference. Here we propose a fully generative probabilistic mixture model using a DPP prior for the
location parameters, with a K-component model using a K-DPP.
In the common case of mixtures of Gaussians (MoG), our posterior computations can be performed
using Gibbs sampling with nearly the same simplicity of the standard case where the location
parameters μk are assumed to be i.i.d. In particular, with the exception of updating the location
parameters {μ1, . . . , μK}, our sampling steps are identical to standard MoG Gibbs updates in the
uncollapsed setting. For the location parameters, instead of sampling each μk independently from its
conditional posterior, our full conditional depends upon the other locations μ\k as well. Details are
in the supplement, where we show that this full conditional has an interpretation as a single draw
from a tilted 1-DPP. As such, we can employ the Gibbs sampling scheme of Sec. 4.
We assess the clustering and density estimation performance of the DPP-based model on both
synthetic and real datasets. In each case, we run 10,000 Gibbs iterations, discard 5,000 as burn-in
and thin the chain by 10. Hyperparameter settings are in the supplement. We randomly permute the
labels in each iteration to ensure balanced label switching. Draws are post-processed following the
algorithm of [23] to address the label switching issue.
Synthetic data To assess the role of the prior in a density estimation task, we generated a small
sample of 100 observations from a mixture of two Gaussians. We consider two cases, the first with
well-separated components and the second with poorly-separated components. We compare a mixture
model with locations sampled i.i.d. (IID) to our DPP repulsive prior (DPP). In both cases, we set
an upper bound of six mixture components. In Fig. 2, we see that both IID and DPP provide very
similar density estimates. However, IID uses many large-mass components to describe the density.
As a measure of simplicity of the resulting density description, we compute the average entropy of the
posterior mixture membership distribution, which is a reasonable metric given the similarity of the
overall densities. Lower entropy indicates a more concise representation in an information-theoretic
sense. We also assess the accuracy of the density estimate by computing both (i) Hamming distance
error relative to true cluster labels and (ii) held-out log-likelihood on 100 observations. The results are
summarized in Table 1. We see that DPP results in (i) significantly lower entropy, (ii) lower overall
clustering error, and (iii) statistically indistinguishable held-out log-likelihood. These results signify
that we have a sparser representation with well-separated (interpretable) clusters while maintaining
the accuracy of the density estimate.
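The entropy metric is simple to compute from the posterior responsibilities; a sketch, where resp[n, k] (a name we introduce here) is the posterior probability that point n belongs to component k:

```python
import numpy as np

def membership_entropy(resp, eps=1e-12):
    """Average over data points of the posterior membership entropy."""
    p = np.clip(resp, eps, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))
```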
[Figure 2: density panels over the datasets Well-Sep, Poor-Sep, Galaxy, Enzyme and Acidity.]
Figure 2: For each synthetic and real dataset: (top) histogram of data overlaid with the actual Gaussian mixture
generating the synthetic data, and posterior mean mixture model for (middle) IID and (bottom) DPP. Red
dashed lines indicate the resulting density estimate.
Table 1: For IID and DPP on synthetic datasets: mean (stdev) for mixture membership entropy, cluster
assignment error rate and held-out log-likelihood of 100 observations under the posterior mean density estimate.

DATASET            ENTROPY                  CLUSTERING ERROR         HELD-OUT LOG-LIKE.
                   IID         DPP          IID         DPP          IID          DPP
Well-separated     1.11 (0.3)  0.88 (0.2)   0.19 (0.1)  0.19 (0.1)   -169 (6)     -171 (8)
Poorly-separated   1.46 (0.2)  0.92 (0.3)   0.47 (0.1)  0.39 (0.1)   -211 (10)    -207 (9)
Real data We also tested our DPP model on three real density estimation tasks considered in [20]:
82 measurements of velocity of galaxies diverging from our own (galaxy), acidity measurement of 155
lakes in Wisconsin (acidity), and the distribution of enzymatic activity in the blood of 245 individuals
(enzyme). We once again judge the complexity of the density estimates using the posterior mixture
membership entropy as a proxy. To assess the accuracy of the density estimates, we performed 5-fold
cross validation to estimate the predictive held-out log-likelihood. As with the synthetic data, we
find that DPP visually results in better separated clusters (Fig. 2). The DPP entropy measure is also
significantly lower for data that are not well separated (acidity and galaxy) while the differences in
predictive log-likelihood estimates are not statistically significant (Table 2).
Finally, we consider a classification task based on the iris dataset: 150 observations from three iris
species with four length measurements. For this dataset, there has been significant debate on the
optimal number of clusters. While there are three species in the data, it is known that two have
very low separation. Based on loss minimization, [24, 26] concluded that the optimal number of
clusters was two. Table 2 compares the classification error using DPP and IID when we assume
for evaluation that the real data has three or two classes (by collapsing the two low-separation classes), but
consider a model with a maximum of six components. While both methods perform similarly for
three classes, DPP has significantly lower classification error under the assumption of two classes,
since DPP places large posterior mass on only two mixture components. This result hints at the
possibility of using the DPP mixture model as a model selection method.
7 Generating diverse sample perturbations
We consider another possible application of continuous-space sampling. In many applications of
inverse reinforcement learning or inverse optimal control, the learner is presented with control
trajectories executed by an expert and tries to estimate a reward function that would approximately
reproduce such policies [1]. In order to estimate the reward function, the learner needs to compare
the rewards of a large set of trajectories (or all, if possible), which becomes intractable in high-dimensional spaces with complex non-linear dynamics. A typical approximation is to use a set of
perturbed expert trajectories as a comparison set, where a good set of trajectories should cover as
large a part of the space as possible.
Table 2: For IID and DPP, mean (stdev) of (left) mixture membership entropy and held-out log-likelihood for
three density estimation tasks and (right) classification error under 3 vs. 2 true classes for the iris data.

DATA      ENTROPY                  HELD-OUT LL.         |  DATA          CLASS ERROR
          IID         DPP          IID       DPP        |                IID          DPP
Galaxy    0.89 (0.2)  0.74 (0.2)   -20 (2)   -21 (2)    |  Iris (3 cls)  0.43 (0.02)  0.43 (0.02)
Acidity   1.32 (0.1)  0.98 (0.1)   -49 (2)   -48 (3)    |  Iris (2 cls)  0.23 (0.03)  0.15 (0.03)
Enzyme    1.01 (0.1)  0.96 (0.1)   -55 (2)   -55 (3)    |
[Figure 3, right panel: coverage rate (y-axis, 0–1) vs. ε (x-axis, 0–100) for Original, DPP and IID samples.]
Figure 3: Left: Diverse set of human poses relative to an original pose by sampling from the RFF (top) and
Nyström (bottom) approximations with a kernel based on MoCap of the activity dance. Right: Fraction of data
having a DPP/i.i.d. sample within an ε neighborhood.
We propose using DPPs to sample a large-coverage set of trajectories, in particular focusing on a
human motion application where we assume a set of motion capture (MoCap) training data taken
from the CMU database [6]. Here, our dimension d is 62, corresponding to a set of joint angle
measurements. For a given activity, such as dancing, we aim to select a reference pose and synthesize
a set of diverse, perturbed poses. To achieve this, we build a kernel with Gaussian quality and
similarity using covariances estimated from the training data associated with the activity. The
Gaussian quality is centered about the selected reference pose and we synthesize new poses by
sampling from our continuous DPP using the low-rank approximation scheme. In Fig. 3, we show
an example of such DPP-synthesized poses. For the activity dance, to quantitatively assess our
performance in covering the activity space, we compute a coverage rate metric based on a random
sample of 50 poses from a DPP. For each training MoCap frame, we compute whether the frame has
a neighbor in the DPP sample within an ε neighborhood. We compare our coverage to that of i.i.d.
sampling from a multivariate Gaussian chosen to have variance matching our DPP sample. Despite
favoring the i.i.d. case by inflating the variance to match the diverse DPP sample, the DPP poses still
provide better average coverage over 100 runs. See Fig. 3 (right) for an assessment of the coverage
metric. A visualization of the samples is in the supplement. Note that the i.i.d. case requires on
average ε = 253 to cover all data whereas the DPP only requires ε = 82. By ε = 40, we cover over
90% of the data on average. Capturing the rare poses is extremely challenging with i.i.d. sampling,
but the diversity encouraged by the DPP overcomes this issue.
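A sketch of the coverage-rate computation, assuming Euclidean distance in the 62-dimensional joint-angle space (the distance choice is our assumption):

```python
import numpy as np

def coverage_rate(frames, sample, eps):
    """Fraction of training frames with a sampled pose within an eps ball."""
    d = np.linalg.norm(frames[:, None, :] - sample[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= eps))
```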
8 Conclusion
Motivated by the recent successes of DPP-based subset modeling in finite-set applications and the
growing interest in repulsive processes on continuous spaces, we considered methods by which
continuous-DPP sampling can be straightforwardly and efficiently approximated for a wide range of
kernels. Our low-rank approach harnessed approximations provided by Nyström and random Fourier
feature methods and then utilized a continuous dual DPP representation. The resulting approximate
sampler garners the same efficiencies that led to the success of the DPP in the discrete case. One can
use this method as a proposal distribution and correct for the approximations via Metropolis-Hastings,
for example. For k-DPPs, we devised an exact Gibbs sampler that utilized the Schur complement
representation. Finally, we demonstrated that continuous-DPP sampling is useful both for repulsive
mixture modeling (which utilizes the Gibbs sampling scheme) and in synthesizing diverse human
poses (which we demonstrated with the low-rank approximation method). As we saw in the MoCap
example, we can handle high-dimensional spaces d, with our computations scaling just linearly with
d. We believe this work opens up opportunities to use DPPs as parts of many models.
Acknowledgements: RHA and EBF were supported in part by AFOSR Grant FA9550-12-1-0453
and DARPA Grant FA9550-12-1-0406 negotiated by AFOSR. BT was partially supported by NSF CAREER Grant 1054215 and by STARnet, a Semiconductor Research Corporation program sponsored
by MARCO and DARPA.
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.
[2] R. H. Affandi, A. Kulesza, and E. B. Fox. Markov determinantal point processes. In Proc. UAI, 2012.
[3] R. H. Affandi, A. Kulesza, E. B. Fox, and B. Taskar. Nyström approximation for large-scale determinantal processes. In Proc. AISTATS, 2013.
[4] R. A. Bernstein and M. Gobbel. Partitioning of space in communities of ants. Journal of Animal Ecology, 48(3):931–942, 1979.
[5] A. Borodin and E. M. Rains. Eynard-Mehta theorem, Schur process, and their Pfaffian analogs. Journal of Statistical Physics, 121(3):291–317, 2005.
[6] CMU. Carnegie Mellon University graphics lab motion capture database. http://mocap.cs.cmu.edu/, 2009.
[7] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer, 2003.
[8] G. E. Fasshauer and M. J. McCourt. Stable evaluation of Gaussian radial basis function interpolants. SIAM Journal on Scientific Computing, 34(2):737–762, 2012.
[9] J. Gillenwater, A. Kulesza, and B. Taskar. Discovering diverse and salient threads in document collections. In Proc. EMNLP, 2012.
[10] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 3:206–229, 2006.
[11] A. Kulesza and B. Taskar. Structured determinantal point processes. In Proc. NIPS, 2010.
[12] A. Kulesza and B. Taskar. k-DPPs: Fixed-size determinantal point processes. In ICML, 2011.
[13] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2–3), 2012.
[14] F. Lavancier, J. Møller, and E. Rubak. Statistical aspects of determinantal point processes. arXiv preprint arXiv:1205.4818, 2012.
[15] O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied Probability, pages 83–122, 1975.
[16] B. Matérn. Spatial Variation. Springer-Verlag, 1986.
[17] T. Neeff, G. S. Biging, L. V. Dutra, C. C. Freitas, and J. R. Dos Santos. Markov point processes for modeling of spatial forest patterns in Amazonia derived from interferometric height. Remote Sensing of Environment, 97(4):484–494, 2005.
[18] F. Petralia, V. Rao, and D. Dunson. Repulsive mixtures. In NIPS, 2012.
[19] A. Rahimi and B. Recht. Random features for large-scale kernel machines. NIPS, 2007.
[20] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components (with discussion). JRSS:B, 59(4):731–792, 1997.
[21] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004.
[22] J. Schur. Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind. Journal für die reine und angewandte Mathematik, 147:205–232, 1917.
[23] M. Stephens. Dealing with label switching in mixture models. JRSS:B, 62(4):795–809, 2000.
[24] C. A. Sugar and G. M. James. Finding the number of clusters in a dataset: An information-theoretic approach. JASA, 98(463):750–763, 2003.
[25] L. A. Waller, A. Särkkä, V. Olsbo, M. Myllymäki, I. G. Panoutsopoulou, W. R. Kennedy, and G. Wendelschafer-Crabb. Second-order spatial analysis of epidermal nerve fibers. Statistics in Medicine, 30(23):2827–2841, 2011.
[26] J. Wang. Consistent selection of the number of clusters via crossvalidation. Biometrika, 97(4):893–904, 2010.
[27] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. NIPS, 2000.
[28] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. NIPS, 2012.
[29] J. Zou and R. P. Adams. Priors for diversity in generative latent variable models. In NIPS, 2012.
4,328 | 4,917 | Actor-Critic Algorithms for Risk-Sensitive MDPs
Prashanth L.A.
INRIA Lille - Team SequeL
Mohammad Ghavamzadeh*
INRIA Lille - Team SequeL & Adobe Research
Abstract
In many sequential decision-making problems we may want to manage risk by
minimizing some measure of variability in rewards in addition to maximizing a
standard criterion. Variance-related risk measures are among the most common
risk-sensitive criteria in finance and operations research. However, optimizing
many such criteria is known to be a hard problem. In this paper, we consider both
discounted and average reward Markov decision processes. For each formulation,
we first define a measure of variability for a policy, which in turn gives us a set of
risk-sensitive criteria to optimize. For each of these criteria, we derive a formula
for computing its gradient. We then devise actor-critic algorithms for estimating
the gradient and updating the policy parameters in the ascent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies.
Finally, we demonstrate the usefulness of our algorithms in a traffic signal control
application.
1 Introduction
The usual optimization criteria for an infinite horizon Markov decision process (MDP) are the expected sum of discounted rewards and the average reward. Many algorithms have been developed to
maximize these criteria both when the model of the system is known (planning) and unknown (learning). These algorithms can be categorized into value-function-based methods, which are mainly based on
the two celebrated dynamic programming algorithms value iteration and policy iteration; and policy
gradient methods that are based on updating the policy parameters in the direction of the gradient
of a performance measure (the value function of the initial state or the average reward). However in
many applications, we may prefer to minimize some measure of risk as well as maximizing a usual
optimization criterion. In such cases, we would like to use a criterion that incorporates a penalty
for the variability induced by a given policy. This variability can be due to two types of uncertainties: 1) uncertainties in the model parameters, which is the topic of robust MDPs (e.g., [12, 7, 24]),
and 2) the inherent uncertainty related to the stochastic nature of the system, which is the topic of
risk-sensitive MDPs (e.g., [10]).
In risk-sensitive sequential decision-making, the objective is to maximize a risk-sensitive criterion
such as the expected exponential utility [10], a variance-related measure [19, 8], or the percentile
performance [9]. The issue of how to construct such criteria in a manner that will be both conceptually meaningful and mathematically tractable is still an open question. Although risk-sensitive
sequential decision-making has a long history in operations research and finance, it has only recently
grabbed attention in the machine learning community. This is why most of the work on this topic
(including those mentioned above) has been in the context of MDPs (when the model is known) and
much less work has been done within the reinforcement learning (RL) framework. In risk-sensitive
RL, we can mention the work by Borkar [4, 5] who considered the expected exponential utility and
the one by Tamar et al. [22] on several variance-related measures. Tamar et al. [22] study stochastic shortest path problems, and in this context, propose a policy gradient algorithm for maximizing
several risk-sensitive criteria that involve both the expectation and variance of the return random
variable (defined as the sum of rewards received in an episode).
* Mohammad Ghavamzadeh is at Adobe Research, on leave of absence from INRIA Lille - Team SequeL.
In this paper, we develop actor-critic algorithms for optimizing variance-related risk measures in
both discounted and average reward MDPs. Our contributions can be summarized as follows:
• In the discounted reward setting we define the measure of variability as the variance of the return
(similar to [22]). We formulate a constrained optimization problem with the aim of maximizing the
mean of the return subject to its variance being bounded from above. We employ the Lagrangian
relaxation procedure [1] and derive a formula for the gradient of the Lagrangian. Since this requires the gradient of the value function at every state of the MDP (see the discussion in Sections 3
and 4), we estimate the gradient of the Lagrangian using two simultaneous perturbation methods: simultaneous perturbation stochastic approximation (SPSA) [20] and smoothed functional (SF) [11],
resulting in two separate discounted reward actor-critic algorithms.1
• In the average reward formulation, we first define the measure of variability as the long-run variance of a policy, and using a constrained optimization problem similar to the discounted case, derive
an expression for the gradient of the Lagrangian. We then develop an actor-critic algorithm with
compatible features [21, 13] to estimate the gradient and to optimize the policy parameters.
• Using the ordinary differential equations (ODE) approach, we establish the asymptotic convergence of our algorithms to locally risk-sensitive optimal policies. Further, we demonstrate the usefulness of our algorithms in a traffic signal control problem.
In comparison to [22], which is the closest related work, we would like to remark that while the authors there develop policy gradient methods for stochastic shortest path problems, we devise actor-critic algorithms for both discounted and average reward settings. Moreover, we note the difficulty in the discounted formulation, which requires estimating the gradient of the value function at every state of the MDP and thus motivated us to employ simultaneous perturbation techniques.
2 Preliminaries
We consider problems in which the agent's interaction with the environment is modeled as an MDP. An MDP is a tuple $(\mathcal{X}, \mathcal{A}, R, P, P_0)$ where $\mathcal{X} = \{1,\dots,n\}$ and $\mathcal{A} = \{1,\dots,m\}$ are the state and action spaces; $R(x,a)$ is the reward random variable whose expectation is denoted by $r(x,a) = \mathbb{E}[R(x,a)]$; $P(\cdot|x,a)$ is the transition probability distribution; and $P_0(\cdot)$ is the initial state distribution. We also need to specify the rule according to which the agent selects actions at each state. A stationary policy $\pi(\cdot|x)$ is a probability distribution over actions, conditioned on the current state. In policy gradient and actor-critic methods, we define a class of parameterized stochastic policies $\{\pi(\cdot|x;\theta),\ x\in\mathcal{X},\ \theta\in\Theta\subseteq\mathbb{R}^{\kappa_1}\}$, estimate the gradient of a performance measure w.r.t. the policy parameters $\theta$ from the observed system trajectories, and then improve the policy by adjusting its parameters in the direction of the gradient. Since in this setting a policy $\pi$ is represented by its $\kappa_1$-dimensional parameter vector $\theta$, policy dependent functions can be written as a function of $\theta$ in place of $\pi$. So, we use $\pi$ and $\theta$ interchangeably in the paper.
We denote by $d^\pi(x)$ and $\pi^\pi(x,a) = d^\pi(x)\pi(a|x)$ the stationary distribution of state $x$ and state-action pair $(x,a)$ under policy $\pi$, respectively. In the discounted formulation, we also define the discounted visiting distribution of state $x$ and state-action pair $(x,a)$ under policy $\pi$ as $d_\gamma^\theta(x|x^0) = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t \Pr(x_t = x \mid x_0 = x^0;\pi)$ and $\pi_\gamma^\theta(x,a|x^0) = d_\gamma^\theta(x|x^0)\,\pi(a|x)$.
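As a concrete illustration of these distributions (our own sketch, not from the paper), the following Python snippet computes the discounted visiting distribution for a small finite MDP with a known policy-induced transition matrix; the function name and example numbers are hypothetical.

```python
import numpy as np

def discounted_visiting_dist(P_pi, x0, gamma):
    """d_gamma(x | x0) = (1 - gamma) * sum_t gamma^t * Pr(x_t = x | x_0 = x0),
    computed in closed form as (1 - gamma) * e_{x0}^T (I - gamma * P_pi)^{-1},
    where P_pi is the state-transition matrix under a fixed policy."""
    n = P_pi.shape[0]
    e0 = np.zeros(n)
    e0[x0] = 1.0
    return (1.0 - gamma) * np.linalg.solve((np.eye(n) - gamma * P_pi).T, e0)

P_pi = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.8, 0.2],
                 [0.1, 0.0, 0.9]])
d = discounted_visiting_dist(P_pi, x0=0, gamma=0.95)
print(d, d.sum())  # a valid distribution: nonnegative and sums to 1
```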
3 Discounted Reward Setting
For a given policy $\pi$, we define the return of a state $x$ (state-action pair $(x,a)$) as the sum of discounted rewards encountered by the agent when it starts at state $x$ (state-action pair $(x,a)$) and then follows policy $\pi$, i.e.,
$$D^\pi(x) = \sum_{t=0}^{\infty} \gamma^t R(x_t, a_t) \,\Big|\, x_0 = x,\ \pi, \qquad D^\pi(x,a) = \sum_{t=0}^{\infty} \gamma^t R(x_t, a_t) \,\Big|\, x_0 = x,\ a_0 = a,\ \pi.$$
The expected values of these two random variables are the value and action-value functions of policy $\pi$, i.e., $V^\pi(x) = \mathbb{E}[D^\pi(x)]$ and $Q^\pi(x,a) = \mathbb{E}[D^\pi(x,a)]$. The goal in the standard discounted reward formulation is to find an optimal policy $\pi^* = \arg\max_\theta V^\theta(x^0)$, where $x^0$ is the initial state of the system. This can be easily extended to the case that the system has more than one initial state: $\pi^* = \arg\max_\theta \sum_{x\in\mathcal{X}} P_0(x) V^\theta(x)$.
¹ We note here that our algorithms can be easily extended to other variance-related risk criteria such as the Sharpe ratio, which is popular in financial decision-making [18] (see Appendix D of [17]).
The most common measure of the variability in the stream of rewards is the variance of the return
$$\Lambda^\pi(x) = \mathbb{E}\big[D^\pi(x)^2\big] - V^\pi(x)^2 = U^\pi(x) - V^\pi(x)^2, \qquad (1)$$
first introduced by Sobel [19]. Note that $U^\pi(x) \triangleq \mathbb{E}\big[D^\pi(x)^2\big]$ is the square reward value function of state $x$ under policy $\pi$. Although $\Lambda^\pi$ of (1) satisfies a Bellman equation, unfortunately, it lacks the monotonicity property of dynamic programming (DP), and thus, it is not clear how the related risk measures can be optimized by standard DP algorithms [19]. This is why policy gradient and actor-critic algorithms are good candidates to deal with this risk measure. We consider the following risk-sensitive measure for discounted MDPs: for a given $\alpha > 0$,
$$\max_\theta\; V^\theta(x^0) \quad \text{subject to} \quad \Lambda^\theta(x^0) \le \alpha. \qquad (2)$$
To solve (2), we employ the Lagrangian relaxation procedure [1] to convert it to the following unconstrained problem:
$$\max_\lambda \min_\theta\; L(\theta,\lambda) \triangleq -V^\theta(x^0) + \lambda\big(\Lambda^\theta(x^0) - \alpha\big), \qquad (3)$$
where $\lambda$ is the Lagrange multiplier. The goal here is to find the saddle point of $L(\theta,\lambda)$, i.e., a point $(\theta^*,\lambda^*)$ that satisfies $L(\theta,\lambda^*) \ge L(\theta^*,\lambda^*) \ge L(\theta^*,\lambda)$, $\forall\theta,\ \forall\lambda > 0$. This is achieved by descending in $\theta$ and ascending in $\lambda$ using the gradients $\nabla_\theta L(\theta,\lambda) = -\nabla_\theta V^\theta(x^0) + \lambda\nabla_\theta\Lambda^\theta(x^0)$ and $\nabla_\lambda L(\theta,\lambda) = \Lambda^\theta(x^0) - \alpha$, respectively. Since $\nabla\Lambda^\theta(x^0) = \nabla U^\theta(x^0) - 2V^\theta(x^0)\,\nabla V^\theta(x^0)$, in order to compute $\nabla\Lambda^\theta(x^0)$, we need to calculate $\nabla U^\theta(x^0)$ and $\nabla V^\theta(x^0)$. From the Bellman equation of $\Lambda^\pi(x)$, proposed by Sobel [19], it is straightforward to derive Bellman equations for $U^\pi(x)$ and the square reward action-value function $W^\pi(x,a) \triangleq \mathbb{E}\big[D^\pi(x,a)^2\big]$ (see Appendix B.1 of [17]). Using these definitions and notations we are now ready to derive expressions for the gradients of $V^\pi(x^0)$ and $U^\pi(x^0)$ that are the main ingredients in calculating $\nabla_\theta L(\theta,\lambda)$.
Lemma 1 Assuming for all $(x,a)$, $\pi(a|x;\theta)$ is continuously differentiable in $\theta$, we have
$$(1-\gamma)\,\nabla V^\pi(x^0) = \sum_{x,a} \pi_\gamma^\theta(x,a|x^0)\,\nabla\log\pi(a|x;\theta)\,Q^\pi(x,a),$$
$$(1-\gamma^2)\,\nabla U^\pi(x^0) = \sum_{x,a} \tilde\pi_\gamma^\theta(x,a|x^0)\,\nabla\log\pi(a|x;\theta)\,W^\pi(x,a) + 2\gamma \sum_{x,a,x'} \tilde\pi_\gamma^\theta(x,a|x^0)\,P(x'|x,a)\,r(x,a)\,\nabla V^\pi(x'),$$
where $\tilde\pi_\gamma^\theta(x,a|x^0) = \tilde d_\gamma^\theta(x|x^0)\,\pi(a|x)$ and $\tilde d_\gamma^\theta(x|x^0) = (1-\gamma^2)\sum_{t=0}^{\infty}\gamma^{2t}\Pr(x_t = x \mid x_0 = x^0;\pi)$.
The proof of the above lemma is available in Appendix B.2 of [17]. It is challenging to devise an efficient method to estimate $\nabla_\theta L(\theta,\lambda)$ using the gradient formulas of Lemma 1. This is mainly because 1) two different sampling distributions ($\pi_\gamma^\theta$ and $\tilde\pi_\gamma^\theta$) are used for $\nabla V^\pi(x^0)$ and $\nabla U^\pi(x^0)$, and 2) $\nabla V^\pi$ appears in the second sum of the $\nabla U^\pi(x^0)$ equation, which implies that we need to estimate the gradient of the value function $V^\pi$ at every state of the MDP. These are the main motivations behind using simultaneous perturbation methods for estimating $\nabla_\theta L(\theta,\lambda)$ in Section 4.
4 Discounted Reward Algorithms
In this section, we present actor-critic algorithms for optimizing the risk-sensitive measure (2) that are based on two simultaneous perturbation methods: simultaneous perturbation stochastic approximation (SPSA) and smoothed functional (SF) [3]. The idea in these methods is to estimate the gradients $\nabla V^\theta(x^0)$ and $\nabla U^\theta(x^0)$ using two simulated trajectories of the system corresponding to policies with parameters $\theta$ and $\theta^+ = \theta + \beta\Delta$. Here $\beta > 0$ is a positive constant and $\Delta$ is a perturbation random variable, i.e., a $\kappa_1$-vector of independent Rademacher (for SPSA) and Gaussian $N(0,1)$ (for SF) random variables. In our actor-critic algorithms, the critic uses linear approximation for the value and square value functions, i.e., $\hat V(x) \approx v^\top \phi_v(x)$ and $\hat U(x) \approx u^\top \phi_u(x)$, where the features $\phi_v(\cdot)$ and $\phi_u(\cdot)$ are from low-dimensional spaces $\mathbb{R}^{\kappa_2}$ and $\mathbb{R}^{\kappa_3}$, respectively.
SPSA-based gradient estimates were first proposed in [20] and have been widely studied and found to be highly efficient in various settings, especially those involving high-dimensional parameters. The SPSA-based estimate for $\nabla V^\theta(x^0)$, and similarly for $\nabla U^\theta(x^0)$, is given by:
$$\nabla_{\theta^{(i)}} \hat V^\theta(x^0) \approx \frac{\hat V^{\theta+\beta\Delta}(x^0) - \hat V^\theta(x^0)}{\beta\,\Delta^{(i)}}, \quad i = 1,\dots,\kappa_1, \qquad (4)$$
[Figure 1: The overall flow of our simultaneous perturbation based actor-critic algorithms. The diagram shows two parallel simulations (with parameters $\theta_t$ and $\theta_t^+ = \theta_t + \beta\Delta_t$), each feeding its rewards and TD quantities ($\delta_t, \epsilon_t, v_t, u_t$ and $\delta_t^+, \epsilon_t^+, v_t^+, u_t^+$) to a critic, and an actor that updates $\theta_t$ using (8) or (9).]
where $\Delta$ is a vector of independent Rademacher random variables. The advantage of this estimator is that it perturbs all directions at the same time (the numerator is identical in all $\kappa_1$ components). So, the number of function measurements needed for this estimator is always two, independent of the dimension $\kappa_1$. However, unlike the SPSA estimates in [20] that use two-sided balanced estimates (simulations with parameters $\theta - \beta\Delta$ and $\theta + \beta\Delta$), our gradient estimates are one-sided (simulations with parameters $\theta$ and $\theta + \beta\Delta$) and resemble those in [6]. The use of one-sided estimates is primarily because the updates of the Lagrangian parameter $\lambda$ require a simulation with the running parameter $\theta$. Using a balanced gradient estimate would therefore come at the cost of an additional simulation (the resulting procedure would then require three simulations), which we avoid by using one-sided gradient estimates.
The SF-based method estimates not the gradient of a function $H(\theta)$ itself, but rather the convolution of $\nabla H(\theta)$ with the Gaussian density function $N(0,\beta^2 I)$, i.e.,
$$C_\beta H(\theta) = \int G_\beta(\theta - z)\,\nabla_z H(z)\,dz = \int \nabla_z G_\beta(z)\,H(\theta - z)\,dz = \frac{1}{\beta}\int \nabla_{z'} G_1(z')\,H(\theta - \beta z')\,dz',$$
where $G_\beta$ is a $\kappa_1$-dimensional probability density function. The first equality above follows by using integration by parts and the second one by using the fact that $\nabla_z G_\beta(z) = \frac{-z}{\beta^2} G_\beta(z)$ and by substituting $z' = z/\beta$. As $\beta \to 0$, it can be seen that $C_\beta H(\theta)$ converges to $\nabla_\theta H(\theta)$ (see Chapter 6 of [3]). Thus, a one-sided SF estimate of $\nabla V^\theta(x^0)$ is given by
$$\nabla_{\theta^{(i)}} \hat V^\theta(x^0) \approx \frac{\Delta^{(i)}}{\beta}\Big(\hat V^{\theta+\beta\Delta}(x^0) - \hat V^\theta(x^0)\Big), \quad i = 1,\dots,\kappa_1, \qquad (5)$$
where $\Delta$ is a vector of independent Gaussian $N(0,1)$ random variables.
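To make the two estimators concrete, here is a minimal Python sketch (our own illustration, not the authors' code) of the one-sided SPSA estimate (4) and SF estimate (5); the placeholder values `v_hat` and `v_hat_plus` stand in for the critic's estimates of $V(x^0)$ under the two policy parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
kappa1, beta = 4, 0.1

def spsa_grad(v_hat, v_hat_plus, delta):
    # One-sided SPSA estimate of Eq. (4): one scalar difference,
    # divided coordinate-wise by the Rademacher perturbation.
    return (v_hat_plus - v_hat) / (beta * delta)

def sf_grad(v_hat, v_hat_plus, delta):
    # One-sided smoothed-functional estimate of Eq. (5): the same
    # difference, multiplied by the Gaussian perturbation.
    return delta * (v_hat_plus - v_hat) / beta

delta_spsa = rng.choice([-1.0, 1.0], size=kappa1)  # Rademacher for SPSA
delta_sf = rng.standard_normal(kappa1)             # N(0, 1) for SF

# Placeholder critic estimates of V(x0) under theta and theta + beta*Delta.
v_hat, v_hat_plus = 3.2, 3.5
print(spsa_grad(v_hat, v_hat_plus, delta_spsa))
print(sf_grad(v_hat, v_hat_plus, delta_sf))
```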
The overall flow of our proposed actor-critic algorithms is illustrated in Figure 1 and involves the following main steps at each time step t:
(1) Take action $a_t \sim \pi(\cdot|x_t;\theta_t)$, observe the reward $r(x_t,a_t)$ and next state $x_{t+1}$ in the first trajectory.
(2) Take action $a_t^+ \sim \pi(\cdot|x_t^+;\theta_t^+)$, observe the reward $r(x_t^+,a_t^+)$ and next state $x_{t+1}^+$ in the second trajectory.
(3) Critic Update: Calculate the temporal difference (TD) errors $\delta_t, \delta_t^+$ for the value and $\epsilon_t, \epsilon_t^+$ for the square value functions using (7), and update the critic parameters $v_t, v_t^+$ for the value and $u_t, u_t^+$ for the square value functions as follows:
$$v_{t+1} = v_t + \zeta_3(t)\,\delta_t\,\phi_v(x_t), \quad v_{t+1}^+ = v_t^+ + \zeta_3(t)\,\delta_t^+\,\phi_v(x_t^+), \quad u_{t+1} = u_t + \zeta_3(t)\,\epsilon_t\,\phi_u(x_t), \quad u_{t+1}^+ = u_t^+ + \zeta_3(t)\,\epsilon_t^+\,\phi_u(x_t^+), \qquad (6)$$
where the TD errors $\delta_t, \delta_t^+, \epsilon_t, \epsilon_t^+$ in (6) are computed as
$$\delta_t = r(x_t,a_t) + \gamma\,v_t^\top\phi_v(x_{t+1}) - v_t^\top\phi_v(x_t), \qquad \delta_t^+ = r(x_t^+,a_t^+) + \gamma\,v_t^{+\top}\phi_v(x_{t+1}^+) - v_t^{+\top}\phi_v(x_t^+),$$
$$\epsilon_t = r(x_t,a_t)^2 + 2\gamma\,r(x_t,a_t)\,v_t^\top\phi_v(x_{t+1}) + \gamma^2\,u_t^\top\phi_u(x_{t+1}) - u_t^\top\phi_u(x_t),$$
$$\epsilon_t^+ = r(x_t^+,a_t^+)^2 + 2\gamma\,r(x_t^+,a_t^+)\,v_t^{+\top}\phi_v(x_{t+1}^+) + \gamma^2\,u_t^{+\top}\phi_u(x_{t+1}^+) - u_t^{+\top}\phi_u(x_t^+). \qquad (7)$$
This TD algorithm to learn the value and square value functions is a straightforward extension of the
algorithm proposed in [23] to the discounted setting. Note that the TD-error for the square value
function U comes directly from the Bellman equation for U (see Appendix B.1 of [17]).
(4) Actor Update: Estimate the gradients $\nabla V^\theta(x^0)$ and $\nabla U^\theta(x^0)$ using SPSA (4) or SF (5) and update the policy parameter $\theta$ and the Lagrange multiplier $\lambda$ as follows: For $i = 1,\dots,\kappa_1$,
$$\theta_{t+1}^{(i)} = \Gamma_i\Big(\theta_t^{(i)} + \frac{\zeta_2(t)}{\beta\,\Delta_t^{(i)}}\Big[\big(1 + 2\lambda_t v_t^\top\phi_v(x^0)\big)\,(v_t^+ - v_t)^\top\phi_v(x^0) - \lambda_t\,(u_t^+ - u_t)^\top\phi_u(x^0)\Big]\Big), \quad \text{SPSA} \qquad (8)$$
$$\theta_{t+1}^{(i)} = \Gamma_i\Big(\theta_t^{(i)} + \frac{\zeta_2(t)\,\Delta_t^{(i)}}{\beta}\Big[\big(1 + 2\lambda_t v_t^\top\phi_v(x^0)\big)\,(v_t^+ - v_t)^\top\phi_v(x^0) - \lambda_t\,(u_t^+ - u_t)^\top\phi_u(x^0)\Big]\Big), \quad \text{SF} \qquad (9)$$
$$\lambda_{t+1} = \Gamma_\lambda\Big(\lambda_t + \zeta_1(t)\big(u_t^\top\phi_u(x^0) - (v_t^\top\phi_v(x^0))^2 - \alpha\big)\Big). \qquad (10)$$
Note that 1) the $\lambda$-update is the same for both SPSA and SF methods, 2) the $\Delta_t^{(i)}$'s are independent Rademacher and Gaussian $N(0,1)$ random variables in the SPSA and SF updates, respectively, 3) $\Gamma$ is an operator that projects a vector $\theta \in \mathbb{R}^{\kappa_1}$ to the closest point in a compact and convex set $C \subset \mathbb{R}^{\kappa_1}$, and $\Gamma_\lambda$ is a projection operator to $[0, \lambda_{max}]$. These projection operators are necessary to ensure convergence of the algorithms, and 4) the step-size schedules $\{\zeta_3(t)\}$, $\{\zeta_2(t)\}$, and $\{\zeta_1(t)\}$ are chosen such that the critic updates are on the fastest time-scale, the policy parameter update is on the intermediate time-scale, and the Lagrange multiplier update is on the slowest time-scale (see Appendix A of [17] for the conditions on the step-size schedules). A proof of convergence of the SPSA and SF algorithms to a (local) saddle point of the risk-sensitive objective function $\hat L(\theta,\lambda) \triangleq -\hat V^\theta(x^0) + \lambda(\hat\Lambda^\theta(x^0) - \alpha)$ is given in Appendix B.3 of [17]. (A code sketch of the critic and actor updates follows below.)
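The following Python sketch (our own hedged illustration, not the authors' implementation) puts the critic update (6)–(7) and the RS-SPSA actor/multiplier updates (8) and (10) together for a single trajectory; the box-projection bounds are hypothetical choices, since the paper only requires projection onto some compact convex set.

```python
import numpy as np

def critic_step(v, u, r, phi_v, phi_v_next, phi_u, phi_u_next, gamma, zeta3):
    """TD critic update of Eqs. (6)-(7) for one trajectory: delta / eps are
    the TD errors of the value and square-value functions."""
    delta = r + gamma * (v @ phi_v_next) - v @ phi_v
    eps = (r ** 2 + 2.0 * gamma * r * (v @ phi_v_next)
           + gamma ** 2 * (u @ phi_u_next) - u @ phi_u)
    return v + zeta3 * delta * phi_v, u + zeta3 * eps * phi_u

def rs_spsa_actor_step(theta, lam, v, v_plus, u, u_plus, phi_v0, phi_u0,
                       beta, delta_vec, zeta2, zeta1, alpha,
                       theta_bound=10.0, lam_max=100.0):
    """RS-SPSA actor and multiplier updates of Eqs. (8) and (10). The box
    projections stand in for Gamma / Gamma_lambda (the specific bounds are
    our own choice)."""
    v0 = v @ phi_v0
    bracket = ((1.0 + 2.0 * lam * v0) * ((v_plus - v) @ phi_v0)
               - lam * ((u_plus - u) @ phi_u0))
    theta_new = np.clip(theta + zeta2 * bracket / (beta * delta_vec),
                        -theta_bound, theta_bound)
    lam_new = float(np.clip(lam + zeta1 * (u @ phi_u0 - v0 ** 2 - alpha),
                            0.0, lam_max))
    return theta_new, lam_new
```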
5 Average Reward Setting
The average reward per step under policy $\pi$ is defined as (see Sec. 2 for the definitions of $d^\pi$ and $\pi^\pi$)
$$\rho(\pi) = \lim_{T\to\infty}\frac{1}{T}\,\mathbb{E}\Big[\sum_{t=0}^{T-1} R_t \,\Big|\, \pi\Big] = \sum_{x,a} d^\pi(x)\,\pi(a|x)\,r(x,a).$$
The goal in the standard (risk-neutral) average reward formulation is to find an average optimal policy, i.e., $\pi^* = \arg\max_\theta \rho(\theta)$. Here a policy $\pi$ is assessed according to the expected differential reward associated with states or state-action pairs. For all states $x\in\mathcal{X}$ and actions $a\in\mathcal{A}$, the differential action-value and value functions of policy $\pi$ are defined as
$$Q^\pi(x,a) = \sum_{t=0}^{\infty} \mathbb{E}\big[R_t - \rho(\pi) \mid x_0 = x,\ a_0 = a,\ \pi\big], \qquad V^\pi(x) = \sum_a \pi(a|x)\,Q^\pi(x,a).$$
In the context of risk-sensitive MDPs, different criteria have been proposed to define a measure of variability, among which we consider the long-run variance of $\pi$ [8] defined as
$$\Lambda(\pi) = \sum_{x,a} \pi^\pi(x,a)\big(r(x,a) - \rho(\pi)\big)^2 = \lim_{T\to\infty}\frac{1}{T}\,\mathbb{E}\Big[\sum_{t=0}^{T-1}\big(R_t - \rho(\pi)\big)^2 \,\Big|\, \pi\Big]. \qquad (11)$$
This notion of variability is based on the observation that it is the frequency of occurrence of state-action pairs that determines the variability in the average reward. It is easy to show that
$$\Lambda(\pi) = \eta(\pi) - \rho(\pi)^2, \quad \text{where} \quad \eta(\pi) = \sum_{x,a} \pi^\pi(x,a)\,r(x,a)^2.$$
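As a quick illustration of these quantities (a sketch of our own, assuming a single long trajectory under a fixed policy), the Monte Carlo estimates of $\rho$, $\eta$ and $\Lambda$ can be computed as follows:

```python
import numpy as np

def long_run_stats(rewards):
    """Monte Carlo estimates of rho, eta and Lambda = eta - rho^2 from one
    long trajectory of rewards generated under a fixed policy, mirroring
    Eq. (11) and the identity below it."""
    r = np.asarray(rewards, dtype=float)
    rho = r.mean()           # average reward rho(pi)
    eta = (r ** 2).mean()    # average squared reward eta(pi)
    return rho, eta, eta - rho ** 2

rng = np.random.default_rng(1)
rho, eta, lam = long_run_stats(rng.normal(1.0, 0.5, size=10_000))
print(rho, eta, lam)  # Lambda should be close to Var(R) = 0.25 here
```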
We consider the following risk-sensitive measure for average reward MDPs in this paper:
$$\max_\theta\; \rho(\theta) \quad \text{subject to} \quad \Lambda(\theta) \le \alpha, \qquad (12)$$
for a given $\alpha > 0$. As in the discounted setting, we employ the Lagrangian relaxation procedure to convert (12) to the unconstrained problem
$$\max_\lambda \min_\theta\; L(\theta,\lambda) \triangleq -\rho(\theta) + \lambda\big(\Lambda(\theta) - \alpha\big).$$
Similar to the discounted case, we descend in $\theta$ using $\nabla_\theta L(\theta,\lambda) = -\nabla_\theta\rho(\theta) + \lambda\nabla_\theta\Lambda(\theta)$ and ascend in $\lambda$ using $\nabla_\lambda L(\theta,\lambda) = \Lambda(\theta) - \alpha$, to find the saddle point of $L(\theta,\lambda)$. Since $\nabla\Lambda(\theta) = \nabla\eta(\theta) - 2\rho(\theta)\,\nabla\rho(\theta)$, in order to compute $\nabla\Lambda(\theta)$ it would be enough to calculate $\nabla\eta(\theta)$. Let $U^\pi$ and $W^\pi$ denote the differential value and action-value functions associated with the square reward under policy $\pi$, respectively. These two quantities satisfy the following Poisson equations:
$$\eta(\pi) + U^\pi(x) = \sum_a \pi(a|x)\Big[r(x,a)^2 + \sum_{x'} P(x'|x,a)\,U^\pi(x')\Big],$$
$$\eta(\pi) + W^\pi(x,a) = r(x,a)^2 + \sum_{x'} P(x'|x,a)\,U^\pi(x'). \qquad (13)$$
We calculate the gradients of $\rho(\theta)$ and $\eta(\theta)$ as (see Lemma 5 of Appendix C.1 in [17]):
$$\nabla\rho(\theta) = \sum_{x,a} \pi^\theta(x,a)\,\nabla\log\pi(a|x;\theta)\,Q(x,a;\theta), \qquad (14)$$
$$\nabla\eta(\theta) = \sum_{x,a} \pi^\theta(x,a)\,\nabla\log\pi(a|x;\theta)\,W(x,a;\theta). \qquad (15)$$
Note that (15) for calculating $\nabla\eta(\theta)$ has close resemblance to (14) for $\nabla\rho(\theta)$, and thus, similar to what we have for (14), any function $b : \mathcal{X} \to \mathbb{R}$ can be added or subtracted to $W(x,a;\theta)$ on the RHS of (15) without changing the result of the integral (see e.g., [2]). So, we can replace $W(x,a;\theta)$ with the square reward advantage function $B(x,a;\theta) = W(x,a;\theta) - U(x;\theta)$ on the RHS of (15) in the same manner as we can replace $Q(x,a;\theta)$ with the advantage function $A(x,a;\theta) = Q(x,a;\theta) - V(x;\theta)$ on the RHS of (14) without changing the result of the integral. We define the temporal difference (TD) errors $\delta_t$ and $\epsilon_t$ for the differential value and square value functions as
$$\delta_t = R(x_t,a_t) - \hat\rho_{t+1} + \hat V(x_{t+1}) - \hat V(x_t), \qquad \epsilon_t = R(x_t,a_t)^2 - \hat\eta_{t+1} + \hat U(x_{t+1}) - \hat U(x_t).$$
If $\hat V$, $\hat U$, $\hat\rho$, and $\hat\eta$ are unbiased estimators of $V^\pi$, $U^\pi$, $\rho(\pi)$, and $\eta(\pi)$, respectively, then we can show that $\delta_t$ and $\epsilon_t$ are unbiased estimates of the advantage functions $A^\pi$ and $B^\pi$, i.e., $\mathbb{E}[\delta_t \mid x_t,a_t,\pi] = A^\pi(x_t,a_t)$ and $\mathbb{E}[\epsilon_t \mid x_t,a_t,\pi] = B^\pi(x_t,a_t)$ (see Lemma 6 in Appendix C.2 of [17]). From this, we notice that $\delta_t\psi_t$ and $\epsilon_t\psi_t$ are unbiased estimates of $\nabla\rho(\theta)$ and $\nabla\eta(\theta)$, respectively, where $\psi_t = \psi(x_t,a_t) = \nabla\log\pi(a_t|x_t)$ is the compatible feature (see e.g., [21, 13]).
6 Average Reward Algorithm
We now present our risk-sensitive actor-critic algorithm for average reward MDPs. Algorithm 1 presents the complete structure of the algorithm along with update rules for the average rewards $\hat\rho_t, \hat\eta_t$; TD errors $\delta_t, \epsilon_t$; critic $v_t, u_t$; and actor $\theta_t, \lambda_t$ parameters. The projection operators $\Gamma$ and $\Gamma_\lambda$ are as defined in Section 4, and similar to the discounted setting, are necessary for the convergence proof of the algorithm. The step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the average and critic updates are on the (same) fastest time-scales $\{\zeta_4(t)\}$ and $\{\zeta_3(t)\}$, the policy parameter update is on the intermediate time-scale $\{\zeta_2(t)\}$, and the Lagrange multiplier is on the slowest time-scale $\{\zeta_1(t)\}$. This results in a three time-scale stochastic approximation algorithm. As in the discounted setting, the critic uses linear approximation for the differential value and square value functions, i.e., $\hat V(x) = v^\top\phi_v(x)$ and $\hat U(x) = u^\top\phi_u(x)$, where $\phi_v(\cdot)$ and $\phi_u(\cdot)$ are feature vectors of size $\kappa_2$ and $\kappa_3$, respectively. Although our estimates of $\rho(\theta)$ and $\eta(\theta)$ are unbiased, since we use biased estimates for $V^\pi$ and $U^\pi$ (linear approximations in the critic), our gradient estimates $\nabla\rho(\theta)$ and $\nabla\eta(\theta)$, and as a result $\nabla L(\theta,\lambda)$, are biased. Lemma 7 in Appendix C.2 of [17] shows the bias in our estimate of $\nabla L(\theta,\lambda)$. We prove that our actor-critic algorithm converges to a (local) saddle point of the risk-sensitive objective function $L(\theta,\lambda)$ (see Appendix C.3 of [17]).
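A minimal sketch of one loop iteration of Algorithm 1 follows (our own illustration; the callables `policy`, `env_step`, `phi_v`, `phi_u`, and `grad_log_pi` are assumed to be supplied by the user, and the projection $\Gamma$ is simplified to a clip):

```python
import numpy as np

def rs_ac_step(x, rho, eta, v, u, theta, lam, policy, env_step,
               phi_v, phi_u, grad_log_pi, zetas, alpha, lam_max=100.0):
    """One loop iteration of Algorithm 1 (a sketch; the helper callables
    are hypothetical, and Gamma_lambda is a clip to [0, lam_max])."""
    z1, z2, z3, z4 = zetas
    a = policy(x, theta)
    x_next, r = env_step(x, a)
    rho = (1 - z4) * rho + z4 * r            # average reward estimate
    eta = (1 - z4) * eta + z4 * r ** 2       # average squared reward
    delta = r - rho + v @ phi_v(x_next) - v @ phi_v(x)
    eps = r ** 2 - eta + u @ phi_u(x_next) - u @ phi_u(x)
    v = v + z3 * delta * phi_v(x)            # critic updates, Eq. (16)
    u = u + z3 * eps * phi_u(x)
    psi = grad_log_pi(x, a, theta)           # compatible feature
    theta = theta - z2 * (-delta * psi
                          + lam * (eps * psi - 2 * rho * delta * psi))  # (17)
    lam = float(np.clip(lam + z1 * (eta - rho ** 2 - alpha), 0.0, lam_max))  # (18)
    return x_next, rho, eta, v, u, theta, lam
```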
7 Experimental Results
We evaluate our algorithms in the context of a traffic signal control application. The objective in our
formulation is to minimize the total number of vehicles in the system, which indirectly minimizes
the delay experienced by the system. The motivation behind using a risk-sensitive control strategy
is to reduce the variations in the delay experienced by road users.
We consider both infinite horizon discounted as well as average settings for the traffic signal control MDP, formulated as in [15]. We briefly recall their formulation here: The state at each time t, $x_t$, is the vector of queue lengths and elapsed times, $x_t = \big(q_1(t),\dots,q_N(t),\ t_1(t),\dots,t_N(t)\big)$, where $q_i$ and $t_i$ denote the queue length and elapsed time since the signal turned to red on lane i.
Algorithm 1 Template of the Average Reward Risk-Sensitive Actor-Critic Algorithm
Input: parameterized policy $\pi(\cdot|\cdot;\theta)$ and value function feature vectors $\phi_v(\cdot)$ and $\phi_u(\cdot)$
Initialization: policy parameters $\theta = \theta_0$; value function weight vectors $v = v_0$ and $u = u_0$; initial state $x_0 \sim P_0(x)$
for t = 0, 1, 2, . . . do
    Draw action $a_t \sim \pi(\cdot|x_t;\theta_t)$
    Observe next state $x_{t+1} \sim P(\cdot|x_t,a_t)$
    Observe reward $R(x_t,a_t)$
    Average Updates: $\hat\rho_{t+1} = \big(1-\zeta_4(t)\big)\hat\rho_t + \zeta_4(t)R(x_t,a_t)$,  $\hat\eta_{t+1} = \big(1-\zeta_4(t)\big)\hat\eta_t + \zeta_4(t)R(x_t,a_t)^2$
    TD Errors: $\delta_t = R(x_t,a_t) - \hat\rho_{t+1} + v_t^\top\phi_v(x_{t+1}) - v_t^\top\phi_v(x_t)$
               $\epsilon_t = R(x_t,a_t)^2 - \hat\eta_{t+1} + u_t^\top\phi_u(x_{t+1}) - u_t^\top\phi_u(x_t)$
    Critic Updates: $v_{t+1} = v_t + \zeta_3(t)\,\delta_t\,\phi_v(x_t)$,  $u_{t+1} = u_t + \zeta_3(t)\,\epsilon_t\,\phi_u(x_t)$   (16)
    Actor Updates: $\theta_{t+1} = \Gamma\big(\theta_t - \zeta_2(t)\big(-\delta_t\psi_t + \lambda_t(\epsilon_t\psi_t - 2\hat\rho_{t+1}\delta_t\psi_t)\big)\big)$   (17)
                   $\lambda_{t+1} = \Gamma_\lambda\big(\lambda_t + \zeta_1(t)(\hat\eta_{t+1} - \hat\rho_{t+1}^2 - \alpha)\big)$   (18)
end for
return policy and value function parameters $\theta, \lambda, v, u$
The actions $a_t$ belong to the set of feasible sign configurations.
The single-stage cost function $h(x_t)$ is defined as follows:
$$h(x_t) = r_1\Big[\sum_{i\in I_p} r_2 \cdot q_i(t) + \sum_{i\notin I_p} s_2 \cdot q_i(t)\Big] + s_1\Big[\sum_{i\in I_p} r_2 \cdot t_i(t) + \sum_{i\notin I_p} s_2 \cdot t_i(t)\Big], \qquad (19)$$
where $r_i, s_i \ge 0$ such that $r_i + s_i = 1$ for $i = 1, 2$ and $r_2 > s_2$. The set $I_p$ is the set of prioritized lanes in the road network considered. While the weights $r_1, s_1$ are used to differentiate between the queue length and elapsed time factors, the weights $r_2, s_2$ help in prioritization of traffic.
Given the above traffic control setting, we aim to minimize both the long-run discounted as well as the
average sum of the cost function h(xt ). The underlying policy for all the algorithms is a parameterized Boltzmann policy (see Appendix F of [17]). We implement the following algorithms in the
discounted setting:
(i) Risk-neutral SPSA and SF algorithms with the actor update as follows:
$$\theta_{t+1}^{(i)} = \Gamma_i\Big(\theta_t^{(i)} + \frac{\zeta_2(t)}{\beta\,\Delta_t^{(i)}}\,(v_t^+ - v_t)^\top\phi_v(x^0)\Big) \quad \text{SPSA},$$
$$\theta_{t+1}^{(i)} = \Gamma_i\Big(\theta_t^{(i)} + \frac{\zeta_2(t)\,\Delta_t^{(i)}}{\beta}\,(v_t^+ - v_t)^\top\phi_v(x^0)\Big) \quad \text{SF},$$
where the critic parameters $v_t^+, v_t$ are updated according to (6). Note that these are two-timescale algorithms with a TD critic on the faster timescale and the actor on the slower timescale.
(ii) Risk-sensitive SPSA and SF algorithms (RS-SPSA and RS-SF) of Section 4 that attempt to
solve (2) and update the policy parameter according to (8) and (9), respectively. In the average
setting, we implement (i) the risk-neutral AC algorithm from [14] that incorporates an actor-critic
scheme, and (ii) the risk-sensitive algorithm of Section 6 (RS-AC) that attempts to solve (12) and
updates the policy parameter according to (17).
All our algorithms incorporate function approximation owing to the curse of dimensionality associated with larger road networks. For instance, assuming only 20 vehicles per lane of a 2x2-grid
network, the cardinality of the state space is approximately of the order $10^{32}$ and the situation is
aggravated as the size of the road network increases. The choice of features used in each of our algorithms is as described in Section V-B of [16]. We perform the experiments on a 2x2-grid network.
The list of parameters and step-sizes chosen for our algorithms is given in Appendix F of [17].
Figures 2(a) and 2(b) show the distribution of the discounted cumulative reward $D^\theta(x^0)$ for the SPSA and SF algorithms, respectively. Figure 3(a) shows the distribution of the average reward $\rho$ for
the algorithms in the average setting. From these plots, we notice that the risk-sensitive algorithms
[Figure 2: Performance comparison in the discounted setting using the distribution of $D^\theta(x^0)$. (a) SPSA vs. RS-SPSA; (b) SF vs. RS-SF.]
[Figure 3: Comparison of AC vs. RS-AC in the average setting using two different metrics. (a) Distribution of $\rho$; (b) Average junction waiting time (AJWT).]
that we propose result in a long-term (discounted or average) reward that is lower than their risk-neutral variants. However, from the empirical variance of the reward (both discounted as well as
average) perspective, the risk-sensitive algorithms outperform their risk-neutral variants.
We use average junction waiting time (AJWT) to compare the algorithms from a traffic signal control
application standpoint. Figure 3(b) presents the AJWT plots for the algorithms in the average setting
(see Appendix F of [17] for similar results for the SPSA and SF algorithms in the discounted setting).
We observe that the performance of our risk-sensitive algorithms is not significantly worse than their
risk-neutral counterparts. This coupled with the observation that our algorithms exhibit low variance,
makes them a suitable choice in risk-constrained systems.
8 Conclusions and Future Work
We proposed novel actor critic algorithms for control in risk-sensitive discounted and average reward
MDPs. All our algorithms involve a TD critic on the fast timescale, a policy gradient (actor) on
the intermediate timescale, and dual ascent for Lagrange multipliers on the slowest timescale. In
the discounted setting, we pointed out the difficulty in estimating the gradient of the variance of
the return and incorporated simultaneous perturbation based SPSA and SF approaches for gradient
estimation in our algorithms. The average setting, on the other hand, allowed for an actor to employ
compatible features to estimate the gradient of the variance. We provided proofs of convergence
(in the appendix of [17]) to locally (risk-sensitive) optimal policies for all the proposed algorithms.
Further, using a traffic signal control application, we observed that our algorithms resulted in lower
variance empirically as compared to their risk-neutral counterparts.
In this paper, we established asymptotic limits for our discounted and average reward risk-sensitive
actor-critic algorithms. To the best of our knowledge, there are no convergence rate results available
for multi-timescale stochastic approximation schemes and hence for actor-critic algorithms. This is
true even for the actor-critic algorithms that do not incorporate any risk criterion. It would be an
interesting research direction to obtain finite-time bounds on the quality of the solution obtained by
these algorithms.
References
[1] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[2] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, 2009.
[3] S. Bhatnagar, H. Prasad, and L.A. Prashanth. Stochastic Recursive Algorithms for Optimization, volume 434. Springer, 2013.
[4] V. Borkar. A sensitivity formula for the risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters, 44:339–346, 2001.
[5] V. Borkar. Q-learning for risk-sensitive control. Mathematics of Operations Research, 27:294–311, 2002.
[6] H. Chen, T. Duncan, and B. Pasik-Duncan. A Kiefer-Wolfowitz algorithm with randomized differences. IEEE Transactions on Automatic Control, 44(3):442–453, 1999.
[7] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203–213, 2010.
[8] J. Filar, L. Kallenberg, and H. Lee. Variance-penalized Markov decision processes. Mathematics of Operations Research, 14(1):147–161, 1989.
[9] J. Filar, D. Krass, and K. Ross. Percentile performance criteria for limiting average Markov decision processes. IEEE Transactions on Automatic Control, 40(1):2–10, 1995.
[10] R. Howard and J. Matheson. Risk sensitive Markov decision processes. Management Science, 18(7):356–369, 1972.
[11] V. Katkovnik and Y. Kulchitsky. Convergence of a class of random search algorithms. Automation and Remote Control, 8:81–87, 1972.
[12] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005.
[13] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proceedings of the Sixteenth European Conference on Machine Learning, pages 280–291, 2005.
[14] L.A. Prashanth and S. Bhatnagar. Reinforcement learning with average cost for adaptive control of traffic lights at intersections. In Proceedings of the Fourteenth International IEEE Conference on Intelligent Transportation Systems, pages 1640–1645. IEEE, 2011.
[15] L.A. Prashanth and S. Bhatnagar. Reinforcement learning with function approximation for traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 12(2):412–421, June 2011.
[16] L.A. Prashanth and S. Bhatnagar. Threshold tuning using stochastic optimization for graded signal control. IEEE Transactions on Vehicular Technology, 61(9):3865–3880, Nov. 2012.
[17] L.A. Prashanth and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. Technical report inria-00794721, INRIA, 2013.
[18] W. Sharpe. Mutual fund performance. Journal of Business, 39(1):119–138, 1966.
[19] M. Sobel. The variance of discounted Markov decision processes. Applied Probability, pages 794–802, 1982.
[20] J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
[21] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of Advances in Neural Information Processing Systems 12, pages 1057–1063, 2000.
[22] A. Tamar, D. Di Castro, and S. Mannor. Policy gradients with variance related risk criteria. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, pages 387–396, 2012.
[23] A. Tamar, D. Di Castro, and S. Mannor. Temporal difference methods for the variance of the reward to go. In Proceedings of the Thirtieth International Conference on Machine Learning, pages 495–503, 2013.
[24] H. Xu and S. Mannor. Distributionally robust Markov decision processes. Mathematics of Operations Research, 37(2):288–300, 2012.
4,329 | 4,918 | Learning from Limited Demonstrations
Beomjoon Kim
School of Computer Science
McGill University
Montreal, Quebec, Canada
Amir-massoud Farahmand
School of Computer Science
McGill University
Montreal, Quebec, Canada
Joelle Pineau
School of Computer Science
McGill University
Montreal, Quebec, Canada
Doina Precup
School of Computer Science
McGill University
Montreal, Quebec, Canada
Abstract
We propose a Learning from Demonstration (LfD) algorithm which leverages expert data, even if they are very few or inaccurate. We achieve this by using both
expert data, as well as reinforcement signals gathered through trial-and-error interactions with the environment. The key idea of our approach, Approximate Policy
Iteration with Demonstration (APID), is that the expert's suggestions are used to define linear constraints which guide the optimization performed by Approximate
Policy Iteration. We prove an upper bound on the Bellman error of the estimate
computed by APID at each iteration. Moreover, we show empirically that APID
outperforms pure Approximate Policy Iteration, a state-of-the-art LfD algorithm,
and supervised learning in a variety of scenarios, including when very few and/or
suboptimal demonstrations are available. Our experiments include simulations as
well as a real robot path-finding task.
1 Introduction
Learning from Demonstration (LfD) is a practical framework for learning complex behaviour policies from demonstration trajectories produced by an expert. In most conventional approaches to
LfD, the agent observes mappings between states and actions in the expert trajectories, and uses supervised learning to estimate a function that can approximately reproduce this mapping. Ideally, the
function (i.e., policy) should also generalize well to regions of the state space that are not observed
in the demonstration data. Many of the recent methods focus on incrementally querying the expert in
appropriate regions of the state space to improve the learned policy, or to reduce uncertainty [1, 2, 3].
Key assumptions of most these works are that (1) the expert exhibits optimal behaviour, (2) the expert demonstrations are abundant, and (3) the expert stays with the learning agent throughout the
training. In practice, these assumptions significantly reduce the applicability of LfD.
We present a framework that leverages insights and techniques from the reinforcement learning
(RL) literature to overcome these limitations of the conventional LfD methods. RL is a general
framework for learning optimal policies from trial-and-error interactions with the environment [4,
5]. The conventional RL approaches alone, however, might have difficulties in achieving a good
performance from relatively little data. Moreover, they are not particularly cautious about the risk involved
in trial-and-error learning, which could lead to catastrophic failures. A combination of both expert
and interaction data (i.e., mixing LfD and RL), however, offers a tantalizing way to effectively
address challenging real-world policy learning problems under realistic assumptions.
Our primary contribution is a new algorithmic framework that integrates LfD, tackled using a large
margin classifier, with a regularized Approximate Policy Iteration (API) method. The method is
formulated as a coupled constraint convex optimization, in which expert demonstrations define a
set of linear constraints in API. The optimization is formulated in a way that permits mistakes in
the demonstrations provided by the expert, and also accommodates variable availability of demonstrations (i.e., just an initial batch or continued demonstrations). We provide a theoretical analysis
describing an upper bound on the Bellman error achievable by our approach.
We evaluate our algorithm in a simulated environment under various scenarios, such as varying the
quality and quantity of expert demonstrations. In all cases, we compare our algorithm with Least-Squares Policy Iteration (LSPI) [6], a popular API method, as well as with a state-of-the-art LfD method, Dataset Aggregation (DAgger) [1]. We also evaluate the algorithm's practicality in a real robot path-finding task, where there are few demonstrations, and exploration data is expensive due to limited time. In all of the experiments, our method outperformed LSPI, using fewer exploration data and exhibiting significantly less variance. Our method also significantly outperformed Dataset Aggregation (DAgger), a state-of-the-art LfD algorithm, in cases where the expert demonstrations are
few or suboptimal.
2 Proposed Algorithm
We consider a continuous-state, finite-action discounted MDP $(\mathcal{X}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{X}$ is a measurable state space, $\mathcal{A}$ is a finite set of actions, $P : \mathcal{X}\times\mathcal{A} \to \mathcal{M}(\mathcal{X})$ is the transition model, $R : \mathcal{X}\times\mathcal{A} \to \mathcal{M}(\mathbb{R})$ is the reward model, and $\gamma \in [0,1)$ is a discount factor.¹ Let $r(x,a) = \mathbb{E}[R(\cdot|x,a)]$, and assume that $r$ is uniformly bounded by $R_{max}$. A measurable mapping $\pi : \mathcal{X} \to \mathcal{A}$ is called a policy. As usual, $V^\pi$ and $Q^\pi$ denote the value and action-value function for $\pi$, and $V^*$ and $Q^*$ denote the corresponding value functions for the optimal policy $\pi^*$ [5].
Our algorithm is couched in the framework of API [7]. A standard API algorithm starts with an initial policy $\pi_0$. At the $(k+1)$-th iteration, given a policy $\pi_k$, the algorithm approximately evaluates $\pi_k$ to find $\hat Q_k$, usually as an approximate fixed point of the Bellman operator $T^{\pi_k}$: $\hat Q_k \approx T^{\pi_k}\hat Q_k$.² This is called the approximate policy evaluation step. Then, a new policy $\pi_{k+1}$ is computed, which is greedy with respect to $\hat Q_k$. There are several variants of API that mostly differ in how the approximate policy evaluation is performed. Most methods attempt to exploit the structures in the value function [8, 9, 10, 11], but in some problems one might have extra information about the structure of good or optimal policies as well. This is precisely our case, since we have expert demonstrations.
To develop the algorithm, we start with regularized Bellman error minimization, which is a common flavour of policy evaluation used in API. Suppose that we want to evaluate a policy $\pi$ given a batch of data $D_{RL} = \{(X_i, A_i)\}_{i=1}^n$ containing $n$ examples, and that the exact Bellman operator $T^\pi$ is known. Then, the new value function $\hat Q$ is computed as:
$$\hat Q \leftarrow \operatorname*{argmin}_{Q\in\mathcal{F}^{|A|}}\; \|Q - T^\pi Q\|_n^2 + \lambda J^2(Q), \qquad (1)$$
where $\mathcal{F}^{|A|}$ is the set of action-value functions, the first term is the squared Bellman error evaluated on the data,³ $J^2(Q)$ is the regularization penalty, which can prevent overfitting when $\mathcal{F}^{|A|}$ is complex, and $\lambda > 0$ is the regularization coefficient. The regularizer $J(Q)$ measures the complexity of function $Q$. Different choices of $\mathcal{F}^{|A|}$ and $J$ lead to different notions of complexity, e.g., various definitions of smoothness, sparsity in a dictionary, etc. For example, $\mathcal{F}^{|A|}$ could be a reproducing kernel Hilbert space (RKHS) and $J$ its corresponding norm, i.e., $J(Q) = \|Q\|_{\mathcal{H}}$.
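For intuition, here is a small Python sketch (ours, not the paper's code) of the regularized Bellman error minimization (1) in the idealized case of linear features and a known Bellman operator, where the problem reduces to ridge regression; `Phi_next` is assumed to hold expected next-state features.

```python
import numpy as np

def bellman_residual_ridge(Phi, Phi_next, r, gamma, lam):
    """Closed-form minimizer of Eq. (1) for linear Q(z) = phi(z)^T w when the
    Bellman operator is known: (T^pi Q) = r + gamma * Phi_next @ w, so the
    squared Bellman error is ||(Phi - gamma*Phi_next) w - r||^2 / n and
    J^2(Q) = ||w||^2. With *sampled* next states this estimator would be
    biased, as discussed around Eq. (4) below."""
    n, p = Phi.shape
    A = Phi - gamma * Phi_next
    return np.linalg.solve(A.T @ A / n + lam * np.eye(p), A.T @ r / n)
```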
In addition to $D_{RL}$, we have a set of expert examples $D_E = \{(X_i, \pi_E(X_i))\}_{i=1}^m$, which we would like to take into account in the optimization process. The intuition behind our algorithm is that we want to use the expert examples to "shape" the value function where they are available, while using the RL data to improve the policy everywhere else. Hence, even if we have few demonstration examples, we can still obtain good generalization everywhere due to the RL data.
To incorporate the expert examples in the algorithm one might require that at the states $X_i$ from $D_E$, the demonstrated action $\pi_E(X_i)$ be optimal, which can be expressed as a large-margin constraint:
¹ For a space $\Omega$ with $\sigma$-algebra $\sigma_\Omega$, $\mathcal{M}(\Omega)$ denotes the set of all probability measures over $\sigma_\Omega$.
² For discrete state spaces, $(T^{\pi_k}Q)(x,a) = r(x,a) + \gamma\sum_{x'} P(x'|x,a)\,Q(x',\pi_k(x'))$.
³ $\|Q - T^\pi Q\|_n^2 \triangleq \frac{1}{n}\sum_{i=1}^n |Q(X_i,A_i) - (T^\pi Q)(X_i,A_i)|^2$ with $(X_i,A_i)$ from $D_{RL}$.
$Q(X_i,\pi_E(X_i)) - \max_{a\in\mathcal{A}\setminus\pi_E(X_i)} Q(X_i,a) \ge 1$. However, this might not always be feasible, or desirable (if the expert itself is not optimal), so we add slack variables $\xi_i \ge 0$ to allow occasional violations of the constraints (similar to soft vs. hard margin in the large-margin literature [12]). The policy evaluation step can then be written as the following constrained optimization problem:
$$\hat Q \leftarrow \operatorname*{argmin}_{Q\in\mathcal{F}^{|A|},\,\xi\in\mathbb{R}^m_+}\; \|Q - T^\pi Q\|_n^2 + \lambda J^2(Q) + \frac{\alpha}{m}\sum_{i=1}^m \xi_i \qquad (2)$$
$$\text{s.t.}\quad Q(X_i,\pi_E(X_i)) - \max_{a\in\mathcal{A}\setminus\pi_E(X_i)} Q(X_i,a) \ge 1 - \xi_i, \quad \text{for all } (X_i,\pi_E(X_i)) \in D_E.$$
The parameter $\alpha$ balances the influence of the data obtained by the RL algorithm (generally by trial-and-error) vs. the expert data. When $\alpha = 0$, we obtain (1), while when $\alpha \to \infty$, we essentially solve a structured classification problem based on the expert's data [13]. Note that the right side of the constraints could also be multiplied by a positive per-example coefficient, to set the size of the acceptable margin between $Q(X_i,\pi_E(X_i))$ and $\max_{a\in\mathcal{A}\setminus\pi_E(X_i)} Q(X_i,a)$. Such a coefficient can then be set adaptively for different examples. However, this is beyond the scope of the paper.
The above constrained optimization problem is equivalent to the following unconstrained one:
$$\hat Q \leftarrow \operatorname*{argmin}_{Q\in\mathcal{F}^{|A|}}\; \|Q - T^\pi Q\|_n^2 + \lambda J^2(Q) + \frac{\alpha}{m}\sum_{i=1}^m \Big[1 - \Big(Q(X_i,\pi_E(X_i)) - \max_{a\in\mathcal{A}\setminus\pi_E(X_i)} Q(X_i,a)\Big)\Big]_+ \qquad (3)$$
where $[1-z]_+ = \max\{0, 1-z\}$ is the hinge loss.
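The unconstrained objective (3) is easy to evaluate for a linear Q; the sketch below (our own, reusing the known-operator simplification from the earlier snippet) shows the three terms — Bellman error, regularizer, and hinged margin loss on the expert data:

```python
import numpy as np

def apid_objective(w, Phi, Phi_next, r, gamma, lam, alpha,
                   phi_expert, expert_actions):
    """Value of the unconstrained APID objective (3) for a linear Q.
    phi_expert has shape (m, |A|, p): features of every action at each
    expert state; expert_actions holds the demonstrated action indices."""
    bellman = np.mean(((Phi - gamma * Phi_next) @ w - r) ** 2)
    qs = phi_expert @ w                      # (m, |A|) action values
    m = qs.shape[0]
    q_dem = qs[np.arange(m), expert_actions]
    qs_rest = qs.copy()
    qs_rest[np.arange(m), expert_actions] = -np.inf
    margins = q_dem - qs_rest.max(axis=1)    # margin over non-expert actions
    hinge = np.maximum(0.0, 1.0 - margins).mean()
    return bellman + lam * (w @ w) + alpha * hinge
```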
In many problems, we do not have access to the exact Bellman operator $T^\pi$, but only to samples $D_{RL} = \{(X_i, A_i, R_i, X_i')\}_{i=1}^n$ with $R_i \sim R(\cdot|X_i,A_i)$ and $X_i' \sim P(\cdot|X_i,A_i)$. In this case, one might want to use the empirical Bellman error $\|Q - \hat T^\pi Q\|_n^2$ (with $(\hat T^\pi Q)(X_i,A_i) \triangleq R_i + \gamma Q(X_i',\pi(X_i'))$ for $1 \le i \le n$) instead of $\|Q - T^\pi Q\|_n^2$. It is known, however, that this is a biased estimate of the Bellman error, and does not lead to proper solutions [14]. One approach to address this issue is to use the modified Bellman error [14]. Another approach is to use the Projected Bellman error, which leads to an LSTD-like algorithm [8]. Using the latter idea, we formulate our optimization as:
$$\hat Q \leftarrow \operatorname*{argmin}_{Q\in\mathcal{F}^{|A|},\,\xi\in\mathbb{R}^m_+}\; \big\|Q - \hat h_Q\big\|_n^2 + \lambda J^2(Q) + \frac{\alpha}{m}\sum_{i=1}^m \xi_i \qquad (4)$$
$$\text{s.t.}\quad \hat h_Q = \operatorname*{argmin}_{h\in\mathcal{F}^{|A|}}\; \big\|h - \hat T^\pi Q\big\|_n^2 + \lambda_h J^2(h),$$
$$\phantom{\text{s.t.}}\quad Q(X_i,\pi_E(X_i)) - \max_{a\in\mathcal{A}\setminus\pi_E(X_i)} Q(X_i,a) \ge 1 - \xi_i, \quad \text{for all } (X_i,\pi_E(X_i)) \in D_E.$$
Here $\lambda_h > 0$ is the regularization coefficient for $\hat h_Q$, which might be different from $\lambda$. For some choices of the function space $\mathcal{F}^{|A|}$ and the regularizer $J$, the estimate $\hat h_Q$ can be found in closed form. For example, one can use linear function approximators $h(\cdot) = \phi(\cdot)^\top u$ and $Q(\cdot) = \phi(\cdot)^\top w$, where $u, w \in \mathbb{R}^p$ are parameter vectors and $\phi(\cdot) \in \mathbb{R}^p$ is a vector of $p$ linearly independent basis functions defined over the space of state-action pairs. Using $L_2$-regularization, $J^2(h) = u^\top u$ and $J^2(Q) = w^\top w$, the best parameter vector $u^*$ can be obtained as a function of $w$ by solving a ridge regression problem:
$$u^*(w) = \big(\Phi^\top\Phi + n\lambda_h I\big)^{-1}\,\Phi^\top\big(r + \gamma\,\Phi' w\big),$$
where $\Phi$, $\Phi'$ and $r$ are the feature matrices and reward vector, respectively: $\Phi = (\phi(Z_1),\dots,\phi(Z_n))^\top$, $\Phi' = (\phi(Z_1'),\dots,\phi(Z_n'))^\top$, $r = (R_1,\dots,R_n)^\top$, with $Z_i = (X_i, A_i)$ and $Z_i' = (X_i', \pi(X_i'))$ (for data belonging to $D_{RL}$). More generally, as discussed above, we might choose the function space $\mathcal{F}^{|A|}$ to be a reproducing kernel Hilbert space (RKHS) and $J$ to be its corresponding norm, which provides the flexibility of working with a nonparametric representation while still having a closed-form solution for $\hat h_Q$. We do not provide the details of the formulation here due to space constraints.
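This closed form translates directly into a few lines of Python (our own sketch with random placeholder data):

```python
import numpy as np

def h_hat_weights(Phi, Phi_next, r, gamma, lam_h, w):
    """Closed-form ridge solution for the inner problem of Eq. (4):
    u*(w) = (Phi^T Phi + n*lam_h*I)^{-1} Phi^T (r + gamma * Phi' w)."""
    n, p = Phi.shape
    return np.linalg.solve(Phi.T @ Phi + n * lam_h * np.eye(p),
                           Phi.T @ (r + gamma * (Phi_next @ w)))

rng = np.random.default_rng(2)
Phi, Phi_next = rng.random((50, 4)), rng.random((50, 4))
u_star = h_hat_weights(Phi, Phi_next, r=rng.random(50),
                       gamma=0.9, lam_h=1e-3, w=rng.random(4))
print(u_star)  # ridge fit of the TD targets r + gamma * Phi' w
```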
The approach presented so far tackles the policy evaluation step of the API algorithm. As usual
in API, we alternate this step with the policy improvement step (i.e., greedification). The resulting
algorithm is called Approximate Policy Iteration with Demonstration (APID).
Up to this point, we have left open the problem of how the datasets DRL and DE are obtained. These
datasets might be regenerated at each iteration, or they might be reused, depending on the availability
of the expert and the environment. In practice, if the expert data is rare, DE will be a single fixed
batch, but DRL could be increased, e.g., by running the most current policy (possibly with some
exploration) to collect more data. The approach used should be tailored to the application. Note
that the values of the regularization coefficients as well as $\alpha$ should ideally change from iteration to iteration as a function of the number of samples as well as the value function $Q^{\pi_k}$. The choice of
these parameters might be automated by model selection [15].
3 Theoretical Analysis
In this section we focus on the $k$-th iteration of APID and consider the solution $\hat Q$ to the optimization problem (2). The theoretical contribution is an upper bound on the true Bellman error of $\hat Q$. Such an upper bound allows us to use error propagation results [16, 17] to provide a performance guarantee on the value of the outcome policy $\pi_K$ (the policy obtained after $K$ iterations of the algorithm) compared to the optimal value function $V^*$. We make the following assumptions in our analysis.
Assumption A1 (Sampling) $D_{RL}$ contains $n$ independent and identically distributed (i.i.d.) samples $(X_i,A_i) \overset{\text{i.i.d.}}{\sim} \nu_{RL} \in \mathcal{M}(\mathcal{X}\times\mathcal{A})$ where $\nu_{RL}$ is a fixed distribution (possibly dependent on $k$) and the states in $D_E = \{(X_i,\pi_E(X_i))\}_{i=1}^m$ are also drawn i.i.d. $X_i \overset{\text{i.i.d.}}{\sim} \nu_E \in \mathcal{M}(\mathcal{X})$ from an expert distribution $\nu_E$. $D_{RL}$ and $D_E$ are independent from each other. We denote $N = n + m$.
Assumption A2 (RKHS) The function space $\mathcal{F}^{|A|}$ is an RKHS defined by a kernel function $K : (\mathcal{X}\times\mathcal{A})\times(\mathcal{X}\times\mathcal{A}) \to \mathbb{R}$, i.e., $\mathcal{F}^{|A|} = \big\{ z \mapsto \sum_{i=1}^N w_i K(z,Z_i) : w \in \mathbb{R}^N \big\}$ with $\{Z_i\}_{i=1}^N = D_{RL} \cup D_E$. We assume that $\sup_{z\in\mathcal{X}\times\mathcal{A}} K(z,z) \le 1$. Moreover, the function space $\mathcal{F}^{|A|}$ is $Q_{max}$-bounded.
Assumption A3 (Function Approximation Property) For any policy $\pi$, $Q^\pi \in \mathcal{F}^{|A|}$.
Assumption A4 (Expansion of Smoothness) For all $Q \in \mathcal{F}^{|A|}$, there exist constants $0 \le L_R, L_P < \infty$, depending only on the MDP and $\mathcal{F}^{|A|}$, such that for any policy $\pi$, $J(T^\pi Q) \le L_R + \gamma L_P J(Q)$.
Assumption A5 (Regularizers) The regularizer functionals $J : B(\mathcal{X}) \to \mathbb{R}$ and $J : B(\mathcal{X}\times\mathcal{A}) \to \mathbb{R}$ are pseudo-norms on $\mathcal{F}$ and $\mathcal{F}^{|A|}$, respectively,⁴ and for all $Q \in \mathcal{F}^{|A|}$ and $a \in \mathcal{A}$, we have $J(Q(\cdot,a)) \le J(Q)$.
Some of these assumptions are quite mild, while some are only here to simplify the analysis, but are not necessary for practical application of the algorithm. For example, the i.i.d. assumption A1 can be relaxed using the independent block technique [18] or other techniques to handle dependent data, e.g., [19]. The method is certainly not specific to RKHS (Assumption A2), so other function spaces can be used without much change in the proof. Assumption A3 holds for "rich" enough function spaces, e.g., universal kernels satisfy it for reasonable $Q^\pi$. Assumption A4 ensures that if $Q \in \mathcal{F}^{|A|}$ then $T^\pi Q \in \mathcal{F}^{|A|}$. It holds if $\mathcal{F}^{|A|}$ is rich enough and the MDP is "well-behaving". Assumption A5 is mild and ensures that if we control the complexity of $Q \in \mathcal{F}^{|A|}$, the complexity of $Q(\cdot,a) \in \mathcal{F}$ is controlled too. Finally, note that focusing on the case when we have access to the true Bellman operator simplifies the analysis while allowing us to gain more understanding about APID. We are now ready to state the main theorem of this paper.
Theorem 1. For any fixed policy $\pi$, let $\hat Q$ be the solution to the optimization problem (2) with the choice of $\alpha > 0$ and $\lambda > 0$. If Assumptions A1–A5 hold, for any $n, m \in \mathbb{N}$ and $0 < \delta < 1$, with probability at least $1 - \delta$ we have
$$\big\|\hat Q - T^\pi \hat Q\big\|^2_{\nu_{RL}} \le 64\,Q_{max}\,\frac{\sqrt{n+m}}{n}\left(\frac{(1+\gamma L_P)\,R_{max}}{\sqrt{\lambda}} + L_R\right)$$
$$+ \min\bigg\{ 2\alpha\,\mathbb{E}_{X\sim\nu_E}\Big[1 - \big(Q^\pi(X,\pi_E(X)) - \max_{a\in\mathcal{A}\setminus\pi_E(X)} Q^\pi(X,a)\big)\Big]_+ + \lambda J^2(Q^\pi),$$
$$\qquad\quad 2\,\big\|Q^{\pi_E} - T^\pi Q^{\pi_E}\big\|^2_{\nu_{RL}} + \lambda J^2(Q^{\pi_E}) + 2\alpha\,\mathbb{E}_{X\sim\nu_E}\Big[1 - \big(Q^{\pi_E}(X,\pi_E(X)) - \max_{a\in\mathcal{A}\setminus\pi_E(X)} Q^{\pi_E}(X,a)\big)\Big]_+ \bigg\}$$
$$+ 4Q_{max}^2\left(\sqrt{\frac{2\ln(4/\delta)}{n}} + \frac{6\ln(4/\delta)}{n}\right) + \frac{20(1+2Q_{max})\ln(8/\delta)}{3m}.$$
⁴ $B(\mathcal{X})$ and $B(\mathcal{X}\times\mathcal{A})$ denote the space of bounded measurable functions defined on $\mathcal{X}$ and $\mathcal{X}\times\mathcal{A}$. Here we are slightly abusing notation as the same symbol is used for the regularizer over both spaces. However, this should not cause any confusion since the identity of the regularizer should always be clear from the context.
The proof of this theorem is in the supplemental material. Let us now discuss some aspects of the result. The theorem guarantees that when the amount of RL data is large enough (n ≫ m), we indeed minimize the Bellman error if we let λ → 0. In that case, the upper bound would be O_P(1/√(nλ)) + min{λJ²(Q^π), 2‖Q^{π_E} − T^π Q^{π_E}‖²_{ν_RL} + λJ²(Q^{π_E})}. Considering only the first term inside the min, the upper bound is minimized by the choice of λ = [n^{1/3} J^{4/3}(Q^π)]^{-1}, which leads to O_P(J^{2/3}(Q^π) n^{-1/3}) behaviour of the upper bound. The bound shows that the difficulty of learning depends on J(Q^π), which is the complexity of the true (but unknown) action-value function Q^π measured according to J in F^{|A|}. Note that Q^π might be "simple" with respect to some choice of function space/regularizer, but complex in another one. The choice of F^{|A|} and J reflects prior knowledge regarding the function space and complexity measure that are suitable.
When the number of samples n increases, we can afford to increase the size of the function space by making λ smaller. Since we have two terms inside the min, the complexity of the problem might actually depend on 2‖Q^{π_E} − T^π Q^{π_E}‖²_{ν_RL} + λJ²(Q^{π_E}), which is the Bellman error of Q^{π_E} (the true action-value function of the expert) according to π, plus the complexity of Q^{π_E} in F^{|A|}. Roughly speaking, if π is close to π_E, the Bellman error would be small. Two remarks are in order. First, this result does not provide a proper upper bound on the Bellman error when m dominates n. This is to be expected, because if π is quite different from π_E and we do not have enough samples in D_RL, we cannot guarantee that the Bellman error, which is measured according to π, will be small. But one can still provide a guarantee by choosing a large α and using a margin-based error bound (cf. Section 4.1 of [20]). Second, this upper bound is not optimal, as we use a simple proof technique based on controlling the supremum of the empirical process. More advanced empirical process techniques can be used to obtain a faster error rate (cf. [12]).
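As a sanity check on the rates quoted above, the optimal λ follows from a one-line calculus argument; the derivation below is ours, with constants suppressed:

\[
f(\lambda) = \frac{1}{\sqrt{n\lambda}} + \lambda J^2(Q^\pi), \qquad
f'(\lambda) = -\frac{1}{2\sqrt{n}}\,\lambda^{-3/2} + J^2(Q^\pi) = 0
\;\Longrightarrow\;
\lambda = \Theta\!\left(\frac{1}{n^{1/3} J^{4/3}(Q^\pi)}\right),
\]

so that \(f(\lambda) = \Theta\big(J^{2/3}(Q^\pi)\, n^{-1/3}\big)\), matching the rate stated in the text.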
4 Experiments
We evaluate APID on a simulated domain, as well as a real robot path-finding task. In the simulated
environment, we compare APID against other benchmarks under varying availability and optimality
of the expert demonstrations. In the real robot task, we evaluate the practicality of deploying APID
on a live system, especially when DRL and DE are both expensive to obtain.
4.1 Car Brake Control Simulation
In the vehicle brake control simulation [21], the agent's goal is to reach a target velocity, and then maintain
that target. It can either press the acceleration pedal or the brake pedal, but not both simultaneously.
A state is represented by four continuous-valued features: target and current velocities, current
positions of brake pedal and acceleration pedal. Given a state, the learned policy has to output one
of five actions: acceleration up, acceleration down, brake up, brake down, do nothing. The reward
is ?10 times the error in velocity. The initial velocity is set to 2m/s, and the target velocity is set
to 7m/s. The expert was implemented using the dynamics between the pedal pressure and output
velocity, from which we calculate the optimal velocity at each state. We added random noise to the
dynamics to simulate a realistic scenario, in which the output velocity is governed by factors such
as friction and wind. The agent has no knowledge of the dynamics, and receives only DE and DRL .
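For readers who want to reproduce a similar setup, the following Python sketch mirrors the interface described above (five discrete actions, four continuous state features, reward equal to −10 times the velocity error). The pedal-response coefficients and the noise level are our own stand-ins, not the dynamics of the simulator in [21].

import random

ACTIONS = ["accel_up", "accel_down", "brake_up", "brake_down", "nothing"]

class BrakeControlEnv:
    def __init__(self, target=7.0, initial=2.0):
        # state: (target velocity, current velocity, brake pos, accel pos)
        self.target, self.velocity = target, initial
        self.brake, self.accel = 0.0, 0.0

    def step(self, action):
        assert action in ACTIONS
        if action == "accel_up":   self.accel = min(1.0, self.accel + 0.1)
        if action == "accel_down": self.accel = max(0.0, self.accel - 0.1)
        if action == "brake_up":   self.brake = min(1.0, self.brake + 0.1)
        if action == "brake_down": self.brake = max(0.0, self.brake - 0.1)
        # Assumed linear pedal response; the noise stands in for the random
        # perturbations (friction, wind) mentioned in the text.
        self.velocity += 0.5 * self.accel - 0.7 * self.brake \
            + random.gauss(0.0, 0.05)
        reward = -10.0 * abs(self.velocity - self.target)  # as in the text
        return (self.target, self.velocity, self.brake, self.accel), reward

env = BrakeControlEnv()
print(env.step("accel_up"))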
For all experiments, we used a linear Radial Basis Function (RBF) approximator for the value function, and CVX, a package for specifying and solving convex programs [22], to solve the optimization problem (4).
[Figure 1 shows two plots of average reward vs. number of iterations, comparing APID, LSPI, and supervised LfD.]
Figure 1: (a) Average reward with m = 15 optimal demonstrations. (b) Average reward with m = 100 sub-optimal demonstrations. Each iteration adds 500 new RL data points to APID and LSPI, while the expert data stays the same. The first iteration has n = 500 − m for APID. LSPI treats all the data at this iteration as RL data.
We set α/m to 1 if the expert is optimal and 0.01 otherwise. The regularization parameter was set according to 1/√(n + m). We averaged results over 10 runs and computed confidence
intervals as well. We compare APID with the regularized version of LSPI [6] in all the experiments.
Depending on the availability of expert data, we either compare APID with the standard supervised
LfD, or DAgger [1], a state-of-the-art LfD algorithm that has strong theoretical guarantees and good
empirical performance when the expert is optimal. DAgger is designed to query for more demonstrations at each iteration; then, it aggregates all demonstrations and trains a new policy. The number
of queries in DAgger increases linearly with the task horizon. For DAgger and supervised LfD, we
use random forests to train the policy.
We first consider the case with little but optimal expert data, with task horizon 1000. At each
iteration, the agent gathers more RL data using a random policy. In this case, shown in Figure 1a,
LSPI performs worse than APID on average, and it also has much higher variance, especially when
DRL is small. This is consistent with empirical results in [6], in which LSPI showed significant
variance even for simple tasks. In the first iteration, APID has moderate variance, but it is quickly
reduced in the next iteration. This is due to the fact that expert constraints impose a particular shape
to the value function, as noted in Section 2. The supervised LfD performs the worst, as the amount of
expert data is insufficient. Results for the case in which the agent has more but sub-optimal expert
data are shown in Figure 1b. Here, with probability 0.5 the expert gives a random action rather
than the optimal action. Compared to supervised LfD, APID and LSPI are both able to overcome
sub-optimality in the expert's behaviour to achieve good performance, by leveraging the RL data.
Next, we consider the case of abundant demonstrations from a sub-optimal expert, who gives random
actions with probability 0.25, to characterize the difference between APID and DAgger. The task
horizon is reduced to 100, due to the number of demonstrations required by DAgger. As can be
seen in Figure 2a, the sub-optimal demonstrations cause DAgger to diverge, because it changes the
policy at each iteration, based on the newly aggregated sub-optimal demonstrations. APID, on the
other hand, is able to learn a better policy by leveraging DRL . APID also outperforms LSPI (which
uses the same DRL ), by generalizing from DE via function approximation. This result illustrates
well APID's robustness to sub-optimal expert demonstrations. Figure 2b shows the result for the
case of optimal and abundant demonstrations. In this case, which fits DAgger's assumptions, the
performance of APID is on par with that of DAgger.
4.2 Real Robot Path Finding
We now evaluate the practicality of APID on a real robot path-finding task and compare it with LSPI
and supervised LfD, using only one demonstrated trajectory. We do not assume that the expert is
optimal (and abundant), and therefore do not include DAgger, which was shown to perform poorly
for this case. In this task, the robot needs to get to the goal in an unmapped environment by learning
to avoid obstacles.
[Figure 2 shows two plots of average reward vs. number of iterations, comparing APID, LSPI, and DAgger.]
Figure 2: (a) Performance with a sub-optimal expert. (b) Performance with an optimal expert. Each iteration (X-axis) adds 100 new expert data points to APID and DAgger. We use n = 3000 − m for APID. LSPI treats all data as RL data.
We use an iRobot Create equipped with a Kinect RGB-depth sensor and a laptop. We encode the Kinect observations with 1 × 3 grid cells (each 1m × 1m). The robot also has three
bumpers to detect a collision from the front, left, and right. Figures 3a and 3b show a picture of the
robot and its environment. In order to reach the goal, the robot needs to turn left to avoid a first box
and wall on the right, while not turning too much, to avoid the couch. Next, the robot must turn
right to avoid a second box, but make sure not to turn too much or too soon to avoid colliding with
the wall or first box. Then, the robot needs to get into the hallway, turn right, and move forward to
reach the goal position; the goal position is 6m forward and 1.5m right from the initial position.
The state space is represented by 3 non-negative integer features to represent number of point clouds
produced by Kinect in each grid cell, and 2 continuous features (robot position). The robot has three
discrete actions: turn left, turn right, and move forward. The reward is minus the distance to the
goal, but if the robot's front bumper is pressed and the robot moves forward, it receives a penalty
equal to 2 times the current distance to the goal. If the robot's left bumper is pressed and the robot
does not turn right, and vice-versa, it also receives 2 times the current distance to the goal. The robot
outputs actions at a rate of 1.7Hz.
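A small Python sketch of this reward signal follows; the helper signature is ours, and we assume the collision penalty replaces, rather than adds to, the base reward (the text is ambiguous on this point).

# Sketch of the reward signal described above; distances are in meters.

def robot_reward(distance_to_goal, front_bumper, left_bumper, right_bumper,
                 action):
    reward = -distance_to_goal                  # base: minus the distance
    if front_bumper and action == "forward":    # driving into an obstacle
        reward = -2.0 * distance_to_goal
    if left_bumper and action != "turn_right":  # should turn away from it
        reward = -2.0 * distance_to_goal
    if right_bumper and action != "turn_left":  # symmetric case
        reward = -2.0 * distance_to_goal
    return reward

print(robot_reward(3.5, front_bumper=True, left_bumper=False,
                   right_bumper=False, action="forward"))   # -> -7.0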
We started from a single trajectory of demonstration, then incrementally added only RL data. The
number of data points added varied at each iteration, but the average was 160 data points, which is
around 1.6 minutes of exploration using an ε-greedy exploration policy (decreasing over iterations).
For 11 iterations, the training time was approximately 18 minutes. Initially, α/m was set to 0.9, then
it was decreased as new data was acquired. To evaluate the performance of each algorithm, we ran
each iteration's policy for a task horizon of 100 (≈ 1 min); we repeated each iteration 5 times, to
compute the mean and standard deviation.
As shown in Figure 3c, APID outperformed both LSPI and supervised LfD; in fact, these two methods could not reach the goal. The supervised LfD kept running into the couch, as state distributions
of expert and robot differed, as noted in [1]. LSPI had a problem due to exploring unnecessary states; specifically, the ε-greedy policy of LSPI explored regions of state space that were not relevant for learning the optimal plan, such as the areas far to the left of the initial position. The ε-greedy policy of APID, on the other hand, was able to leverage the expert data to efficiently explore the most relevant states and avoid unnecessary collisions. For example, it learned to avoid the first box in the first
iteration, then explored states near the couch, where supervised LfD failed. Table 1 gives the time it
took for the robot to reach the goal (within 1.5m). Only iterations 9, 10 and 11 of APID reached the
goal. Note that the times achieved by APID (iteration 11) are similar to the expert.
Table 1: Average time to reach the goal

                  Demonstration   APID-9th      APID-10th     APID-11th
Time to Goal (s)  35.9            38.4 ± 0.81   37.7 ± 0.84   36.1 ± 0.24
[Figure 3c shows the distance to the goal vs. number of iterations for APID, LSPI, and supervised LfD.]
Figure 3: (a) Picture of the iRobot Create equipped with Kinect. (b) Top-down view of the environment. The star represents the goal, the circle represents the initial position, black lines indicate walls, and the three grid cells represent the vicinity of the Kinect. (c) Distance to the goal for LSPI, APID, and supervised LfD with random forest.
5 Discussion
We proposed an algorithm that learns from limited demonstrations by leveraging a state-of-the-art
API method. To our knowledge, this is the first LfD algorithm that learns from few and/or suboptimal
demonstrations. Most LfD methods focus on solving the issue of violation of i.i.d. data assumptions
by changing the policy slowly [23], by reducing the problem to online learning [1], by querying
the expert [2] or by obtaining corrections from the expert [3]. These methods all assume that the
expert is optimal or close-to-optimal, and demonstration data is abundant. The TAMER system [24]
uses rewards provided by the expert (and possibly blended with MDP rewards), instead of assuming
that an action choice is provided. There are a few Inverse RL methods that do not assume optimal
experts [25, 26], but their focus is on learning the reward function rather than on planning. Also,
these methods require a model of the system dynamics, which is typically not available in practice.
In the simulated environment, we compared our method with DAgger (a state-of-the-art LfD
method) as well as with a popular API algorithm, LSPI. We considered four scenarios: very few but
optimal demonstrations, a reasonable number of sub-optimal demonstrations, abundant sub-optimal
demonstrations, and abundant optimal demonstrations. In the first three scenarios, which are more
realistic, our method outperformed the others. In the last scenario, in which the standard LfD assumptions hold, APID performed just as well as DAgger. In the real robot path-finding task, our
method again outperformed LSPI and supervised LfD. LSPI suffered from inefficient exploration,
and supervised LfD was affected by the violation of the i.i.d. assumption, as pointed out in [1].
We note that APID accelerated API by utilizing demonstration data. Previous approaches [27, 28]
accelerated policy search, e.g. by using LfD to find initial policy parameters. In contrast, APID
leverages the expert data to shape the policy throughout the planning.
The most similar to our work, in terms of goals, is [29], where the agent is given multiple suboptimal trajectories, and infers a hidden desired trajectory using Expectation Maximization and
Kalman Filtering. However, their approach is less general, as it assumes a particular noise model in
the expert data, whereas APID is able to handle demonstrations that are sub-optimal non-uniformly
along the trajectory.
In future work, we will explore more applications of APID and study its behaviour with respect to α_i. For instance, in safety-critical applications, a large α_i could be used at critical states.
Acknowledgements
Funding for this work was provided by the NSERC Canadian Field Robotics Network, Discovery Grants Program, and Postdoctoral Fellowship Program, as well as by the CIHR (CanWheel team), and the FQRNT (Regroupements stratégiques INTER et REPARTI).
References
[1] S. Ross, G. Gordon, and J. A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[2] S. Chernova and M. Veloso. Interactive policy learning through confidence-based autonomy. Journal of Artificial Intelligence Research, 34, 2009.
[3] B. Argall, M. Veloso, and B. Browning. Teacher feedback to scaffold and refine demonstrated motion primitives on a mobile robot. Robotics and Autonomous Systems, 59(3-4), 2011.
[4] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[5] Cs. Szepesvári. Algorithms for Reinforcement Learning. Morgan Claypool Publishers, 2010.
[6] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[7] D. P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310-335, 2011.
[8] A.-m. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized policy iteration. In NIPS 21, 2009.
[9] J. Z. Kolter and A. Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML, 2009.
[10] G. Taylor and R. Parr. Kernelized value function approximation for reinforcement learning. In ICML, 2009.
[11] M. Ghavamzadeh, A. Lazaric, R. Munos, and M. Hoffman. Finite-sample analysis of Lasso-TD. In ICML, 2011.
[12] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[13] I. Tsochantaridis, T. Joachims, T. Hofmann, Y. Altun, and Y. Singer. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(2):1453-1484, 2006.
[14] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71:89-129, 2008.
[15] A.-m. Farahmand and Cs. Szepesvári. Model selection in reinforcement learning. Machine Learning, 85(3):299-332, 2011.
[16] R. Munos. Error bounds for approximate policy iteration. In ICML, 2003.
[17] A.-m. Farahmand, R. Munos, and Cs. Szepesvári. Error propagation for approximate policy and value iteration. In NIPS 23, 2010.
[18] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94-116, January 1994.
[19] P.-M. Samson. Concentration of measure inequalities for Markov chains and Φ-mixing processes. The Annals of Probability, 28(1):416-461, 2000.
[20] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[21] T. Hester, M. Quinlan, and P. Stone. RTMBA: A real-time model-based reinforcement learning architecture for robot control. In ICRA, 2012.
[22] CVX Research, Inc. CVX: Matlab software for disciplined convex programming, version 2.0. http://cvxr.com/cvx, August 2012.
[23] S. Ross and J. A. Bagnell. Efficient reductions for imitation learning. In AISTATS, 2010.
[24] W. B. Knox and P. Stone. Reinforcement learning from simultaneous human and MDP reward. In AAMAS, 2012.
[25] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In IJCAI, 2007.
[26] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, 2008.
[27] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In NIPS 15, 2002.
[28] J. Kober and J. Peters. Policy search for motor primitives in robotics. Machine Learning, 84(1-2):171-203, 2011.
[29] A. Coates, P. Abbeel, and A. Y. Ng. Learning for control from multiple demonstrations. In ICML, 2008.
4,330 | 4,919 | Distributed Exploration in Multi-Armed Bandits
Eshcar Hillel
Yahoo Labs, Haifa
[email protected]
Zohar Karnin
Yahoo Labs, Haifa
[email protected]
Tomer Koren†
Technion ? Israel Inst. of Technology
[email protected]
Ronny Lempel
Yahoo Labs, Haifa
[email protected]
Oren Somekh
Yahoo Labs, Haifa
[email protected]
Abstract
We study exploration in Multi-Armed Bandits in a setting where k players collaborate in order to identify an ε-optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between them. In particular, our main result shows that by allowing the k players to communicate only once, they are able to learn √k times faster than a single player. That is, distributing learning to k players gives rise to a factor √k parallel speed-up. We complement this result with a lower bound showing this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor k speed-up in learning performance, with communication only logarithmic in 1/ε.
1 Introduction
Over the past years, multi-armed bandit (MAB) algorithms have been employed in an increasing number of large-scale applications. MAB algorithms rank results of search engines [23, 24], choose
between stories or ads to showcase on web sites [2, 8], accelerate model selection and stochastic
optimization tasks [21, 22], and more. In many of these applications, the workload is simply too
high to be handled by a single processor. In the web context, for example, the sheer volume of user
requests and the high rate at which they arrive, require websites to use many front-end machines
that run in multiple data centers. In the case of model selection tasks, a single evaluation of a
certain model or configuration might require considerable computation time, so that distributing
the exploration process across several nodes may result with a significant gain in performance. In
this paper, we study such large-scale MAB problems in a distributed environment where learning is
performed by several independent nodes that may take actions and observe rewards in parallel.
Following recent MAB literature [14, 3, 15, 18], we focus on the problem of identifying a "good"
bandit arm with high confidence. In this problem, we may repeatedly choose one arm (corresponding
to an action) and observe a reward drawn from a probability distribution associated with that arm.
Our goal is to find an arm with an (almost) optimal expected reward, with as few arm pulls as
possible (that is, minimize the simple regret [7]). Our objective is thus explorative in nature, and in
† Most of this work was done while the author was at Yahoo Labs, Haifa.
particular we do not mind the incurred costs or the involved regret. This is indeed the natural goal in
many applications, such as in the case of model selection problems mentioned above. In our setup,
a distributed strategy is evaluated by the number of arm pulls per node required for the task, which
correlates with the parallel speed-up obtained by distributing the learning process.
We abstract a distributed MAB system as follows. In our model, there are k players that correspond
to k independent machines in a cluster. The players are presented with a set of arms, with a common
goal of identifying a good arm. Each player receives a stream of queries, upon each of which it chooses an arm
to pull. This stream is usually regulated by some load balancer ensuring the load is roughly divided
evenly across players. To collaborate, the players may communicate with each other. We assume
that the bandwidth of the underlying network is limited, so that players cannot simply share every
piece of information. Also, communicating over the network might incur substantial latencies, so
players should refrain from doing so as much as possible. When measuring communication of a
certain multi-player protocol we consider the number of communication rounds it requires, where
in a round of communication each player broadcasts a single message (of arbitrary size) to all other
players. Round-based models are natural in distributed learning scenarios, where frameworks such
as MapReduce [11] are ubiquitous.
What is the tradeoff between the learning performance of the players, and the communication between them? At one extreme, if all players broadcast to each other each and every arm reward as
it is observed, they can simply simulate the decisions of a serial, optimal algorithm. However, the
communication load of this strategy is of course prohibitive. At the other extreme, if the players
never communicate, each will suffer the learning curve of a single player, thereby avoiding any possible speed-up the distributed system may provide. Our goal in this work is to better understand this
tradeoff between inter-player communication and learning performance.
Considering the high cost of communication, perhaps the simplest and most important question that
arises is how well can the players learn while keeping communication to the very minimum. More
specifically, is there a non-trivial strategy by which the players can identify a ?good? arm while
communicating only once, at the end of the process? As we discuss later on, this is a non-trivial
question. On?the positive side, we present a k-player algorithm that attains an asymptotic parallel
speed-up of k factor, as compared to the conventional, serial setting. In fact, our approach demonstrates how to convert virtually any serial exploration strategy to a distributed algorithm enjoying
such speed-up. Ideally, one could hope for a factor k speed-up in learning performance;
? however,
we show a lower bound on the required number of pulls in this case, implying that our k speed-up
is essentially optimal.
At the other end of the trade-off, we investigate how much communication is necessary for obtaining
the ideal factor k parallel speed-up. We present a k-player strategy achieving such speed-up, with
communication only logarithmic in 1/ε. As a corollary, we derive an algorithm that demonstrates an
explicit trade-off between the number of arm pulls and the amount of inter-player communication.
1.1 Related Work
Recently there has been an increasing interest in distributed and collaborative learning problems.
In the MAB literature, several recent works consider multi-player MAB scenarios in which players
actually compete with each other, either on arm-pulls resources [15] or on the rewards received [19].
In contrast, we study a collaborative multi-player problem and investigate how sharing observations
helps players achieve their common goal. The related work of Kanade et al. [17] in the context
of non-stochastic (i.e. adversarial) experts also deals with a collaborative problem in a similar distributed setup, and examine the trade-off between communication and the cumulative regret.
Another line of recent work was focused on distributed stochastic optimization [13, 1, 12] and distributed PAC models [6, 10, 9], investigating the involved communication trade-offs. The techniques
developed there, however, are inherently ?batch? learning methods and thus are not directly applicable to our MAB problem which is online in nature. Questions involving network topology [13, 12]
and delays [1] are relevant to our setup as well; however, our present work focuses on establishing
the first non-trivial guarantees in a distributed collaborative MAB setting.
2 Problem Setup and Statement of Results
In our model of the Distributed Multi-Armed Bandit problem, there are k ≥ 1 individual players. The players are given n arms, enumerated by [n] := {1, 2, ..., n}. Each arm i ∈ [n] is associated with a reward, which is a [0, 1]-valued random variable with expectation p_i. For convenience, we assume that the arms are ordered by their expected rewards, that is p_1 ≥ p_2 ≥ ··· ≥ p_n. At every time step t = 1, 2, ..., T, each player pulls one arm of his choice and observes an independent sample of its reward. Each player may choose any of the arms, regardless of the other players and their actions. At the end of the game, each player must commit to a single arm. In a communication round, that may take place at any predefined time step, each player may broadcast a message to all other players. While we do not restrict the size of each message, in a reasonable implementation a message should not be larger than Õ(n) bits.
In the best-arm identification version of the problem, the goal of a multi-player algorithm given some target confidence level δ > 0, is that with probability at least 1 − δ all players correctly identify the best arm (i.e. the arm having the maximal expected reward). For simplicity, we assume in this setting that the best arm is unique. Similarly, in the (ε, δ)-PAC variant the goal is that each player finds an ε-optimal (or "ε-best") arm, that is an arm i with p_i ≥ p_1 − ε, with high probability. In this paper we focus on the more general (ε, δ)-PAC setup, which also includes best-arm identification for ε = 0.
We use the notation Δ_i := p_1 − p_i to denote the suboptimality gap of arm i, and occasionally use Δ⋆ := Δ_2 for denoting the minimal gap. In the best-arm version of the problem, where we assume that the best arm is unique, we have Δ_i > 0 for all i > 1. When dealing with the (ε, δ)-PAC setup, we also consider the truncated gaps Δ^ε_i := max{Δ_i, ε}. In the context of MAB problems, we are interested in deriving distribution-dependent bounds, namely, bounds that are stated as a function of ε, δ and also the distribution-specific values Δ := (Δ_2, ..., Δ_n). The Õ notation in our bounds hides polylogarithmic factors in n, k, δ, ε, and also in Δ_2, ..., Δ_n.
In the case of serial exploration algorithms (i.e., when there is only one player), the lower bounds of Mannor and Tsitsiklis [20] and Audibert et al. [3] show that in general Ω̃(H_ε) pulls are necessary for identifying an ε-best arm, where

    H_ε := Σ_{i=2}^{n} 1/(Δ^ε_i)² .    (1)

Intuitively, the hardness of the task is therefore captured by the quantity H_ε, which is roughly the number of arm pulls needed to find an ε-best arm with a reasonable probability; see also [3] for a discussion. Our goal in this work is therefore to establish bounds in the distributed model that are expressed as a function of H_ε, in the same vein as the bounds known in the classic MAB setup.
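For intuition, H_ε is easy to compute for a concrete instance; a short Python sketch follows (the reward means below are made up for illustration):

import numpy as np

# Computing H_eps of Eq. (1) for a concrete bandit instance.

def hardness(means, eps):
    p = np.sort(np.asarray(means, dtype=float))[::-1]  # p1 >= ... >= pn
    gaps = p[0] - p[1:]                                # Delta_2..Delta_n
    trunc = np.maximum(gaps, eps)                      # max{Delta_i, eps}
    return float(np.sum(1.0 / trunc ** 2))

print(hardness([0.9, 0.8, 0.8, 0.5, 0.3], eps=0.05))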
2.1 Baseline approaches
We now discuss several baseline approaches for the problem, starting with our main focus, the single-round setting. The first obvious approach, already mentioned earlier, is the no-communication strategy: just let each player explore the arms in isolation from the other players, following an independent instance of some serial strategy; at the end of the executions, all players hold an ε-best arm. Clearly, this approach performs poorly in terms of learning performance, needing Ω̃(H_ε) pulls per player in the worst case and not leading to any parallel speed-up.
Another straightforward approach is to employ a majority vote among the players: let each player independently identify an arm, and choose the arm having most of the votes (alternatively, at least half of the votes). However, this approach does not lead to any improvement in performance: for this vote to work, each player has to solve the problem correctly with reasonable probability, which already requires Ω̃(H_ε) pulls of each. Even if we somehow split the arms between players and let each player explore a share of them, a majority vote would still fail, since those players getting the "good" arms might have to pull arms Ω̃(H_ε) times; a small MAB instance might be as hard as the full-sized problem (in terms of the complexity measure H_ε).
When considering algorithms employing multiple communication rounds, we use an ideal simulated
serial algorithm (i.e., a full-communication approach) as our baseline. This approach is of course
prohibited in our context, but is able to achieve the optimal parallel speed-up, linear in the number
of players k.
2.2 Our results
We now discuss our approach and overview our algorithmic results. These are summarized in Table 1 below, which compares the different algorithms in terms of parallel speed-up and communication.
Our approach for the one-round case is based on the idea of a majority vote. For the best-arm identification task, our observation is that by letting each player explore a smaller set of n/√k arms chosen at random and choose one of them as "best", about √k of the players would come up with the global best arm. This (partial) consensus on a single arm is a key aspect of our approach, since it allows the players to identify the correct best arm among the votes of all k players, after sharing information only once. Our approach leads to a factor √k parallel speed-up which, as we demonstrate in our lower bound, is the optimal factor in this setting. Although our goal here is pure exploration, in our algorithms each player follows an explore-exploit strategy. The idea is that a player should sample his recommended arm as much as his budget permits, even if it was easy to identify in his smaller-sized problem. This way we can guarantee that the top arms are sampled to a sufficient precision by the time each of the players has to choose a single best arm.
The algorithm for the (ε, δ)-PAC setup is similar, but its analysis is more challenging. As mentioned above, an agreement on a single arm is essential for a vote to work. Here, however, there might be several ε-best arms, so arriving at a consensus on a single one is more difficult. Nonetheless, by examining two different regimes, namely when there are "many" ε-best arms and when there are "few" of them, our analysis shows that a vote can still work and achieve the √k multiplicative speed-up.
In the case of multiple communication rounds, we present a distributed elimination-based algorithm that discards arms right after each communication round. Between rounds, we share the work load between players uniformly. We show that the number of such rounds can be reduced to as low as O(log(1/ε)), by eliminating all 2^{−r}-suboptimal arms in the r-th round. A similar idea was employed in [4] for improving the regret bound of UCB with respect to the parameters Δ_i. We also use this technique to develop an algorithm that performs only R communication rounds, for any given parameter R ≥ 1, that achieves a slightly worse multiplicative ε^{2/R}·k speed-up.
Setting       Algorithm             Speed-up      Communication
One-round     No-Communication      1             none
              Majority Vote         1             1 round
              Algorithm 1,2         √k            1 round
Multi-round   Serial (simulated)    k             every time step
              Algorithm 3           k             O(log 1/ε) rounds
              Algorithm 3*          ε^{2/R}·k     R rounds

Table 1: Summary of baseline approaches and our results. The speed-up results are asymptotic (logarithmic factors are omitted).
3 One Communication Round
This section considers the most basic variant of the multi-player MAB problem, where each player is only allowed a single transmission, when finishing her queries. For clarity of exposition, we first consider the best-arm identification setting in Section 3.1. Section 3.2 deals with the (ε, δ)-PAC setup. We demonstrate the tightness of our result in Section 3.3 with a lower bound on the required budget of arm pulls in this setting.
Our algorithms in this section assume the availability of a serial algorithm A(A, ε), that given a set of arms A and target accuracy ε, identifies an ε-best arm in A with probability at least 2/3 using no more than

    c_A · Σ_{i∈A} (1/(Δ^ε_i)²) · log(|A|/Δ^ε_i)    (2)
arm pulls, for some constant c_A > 1. For example, the Successive Elimination algorithm [14] and the Exp-Gap Elimination algorithm [18] provide a guarantee of this form. Essentially, any exploration strategy whose guarantee is expressed as a function of H_ε can be used as the procedure A, with technical modifications in our analysis.
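As one concrete candidate for the procedure A, here is a simplified Python sketch of Successive Elimination [14]; the confidence-radius constants are illustrative and not tuned to reproduce the constant c_A of Eq. (2).

import math, random

def successive_elimination(pull, n, eps=0.0, delta=1/3):
    # pull(i) returns one [0,1]-valued sample of arm i's reward.
    # With eps = 0 this loops until a single arm survives.
    active, sums, t = list(range(n)), [0.0] * n, 0
    while len(active) > 1:
        t += 1
        for i in active:
            sums[i] += pull(i)
        rad = math.sqrt(math.log(4.0 * n * t * t / delta) / (2.0 * t))
        best = max(sums[i] for i in active) / t
        # keep only arms that could still be eps-optimal
        active = [i for i in active if sums[i] / t + rad + eps >= best - rad]
        if eps > 0 and 2.0 * rad <= eps:   # all survivors are eps-optimal
            break
    return active[0]

means = [0.5, 0.6, 0.75]
print(successive_elimination(lambda i: float(random.random() < means[i]),
                             n=3, eps=0.1))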
3.1 Best-arm Identification Algorithm
We now describe our one-round best-arm identification algorithm. For simplicity, we present a version matching δ = 1/3, meaning that the algorithm produces the correct arm with probability at least 2/3; we later explain how to extend it to deal with arbitrary values of δ.
Our algorithm is akin to a majority vote among the multiple players, in which each player pulls arms in two stages. In the first Explore stage, each player independently solves a "smaller" MAB instance on a random subset of the arms using the exploration strategy A. In the second Exploit stage, each player exploits the arm identified as "best" in the first stage, and communicates that arm and its observed average reward. See Algorithm 1 below for a precise description. An appealing feature of our algorithm is that it requires each player to transmit a single message of constant size (up to logarithmic factors).
In Theorem 3.1 we prove that Algorithm 1 indeed achieves the promised upper bound.

Algorithm 1 One-Round Best-Arm
input: time horizon T
output: an arm
1: for player j = 1 to k do
2:   choose a subset A_j of 6n/√k arms uniformly at random
3:   Explore: execute i_j ← A(A_j, 0) using at most (1/2)T pulls (and halting the algorithm early if necessary); if the algorithm fails to identify any arm or does not terminate gracefully, let i_j be an arbitrary arm
4:   Exploit: pull arm i_j for (1/2)T times and let q̃_j be its average reward
5:   communicate the numbers i_j, q̃_j
6: end for
7: let k_i be the number of players j with i_j = i, and define A = {i : k_i > √k}
8: let p̃_i = (1/k_i) Σ_{j : i_j = i} q̃_j for all i
9: return arg max_{i∈A} p̃_i; if the set A is empty, output an arbitrary arm

Theorem 3.1. Algorithm 1 identifies the best arm correctly with probability at least 2/3 using no more than

    O( (1/√k) · Σ_{i=2}^{n} (1/Δ_i²) · log(n/Δ_i) )

arm pulls per player, provided that 6 ≤ √k ≤ n. The algorithm uses a single communication round, in which each player communicates Õ(1) bits.

By repeating the algorithm O(log(1/δ)) times and taking the majority vote of the independent runs, we can amplify the success probability to 1 − δ for any given δ > 0. Note that we can still do that with one communication round (at the end of all executions), but each player now has to communicate O(log(1/δ)) values¹.

Theorem 3.2. There exists a k-player algorithm that given Õ( (1/√k) · Σ_{i=2}^{n} 1/Δ_i² ) arm pulls, identifies the best arm correctly with probability at least 1 − δ. The algorithm uses a single communication round, in which each player communicates O(log(1/δ)) numerical values.
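The following Python simulation sketches Algorithm 1 end-to-end under Bernoulli rewards. For brevity the Explore stage spreads its budget uniformly instead of calling a real subroutine A, so the constants do not match the analysis; the structure (random subsets, explore/exploit split, √k-vote, averaging of exploit estimates) follows the pseudocode above.

import math, random
from collections import Counter, defaultdict

def one_round_best_arm(means, k, T, seed=0):
    random.seed(seed)
    n = len(means)
    pull = lambda i: float(random.random() < means[i])
    votes, exploit = [], defaultdict(list)
    for _ in range(k):                                   # independent players
        m = min(n, math.ceil(6 * n / math.sqrt(k)))
        subset = random.sample(range(n), m)
        budget = max(1, T // (2 * m))                    # Explore: T/2 pulls
        est = {i: sum(pull(i) for _ in range(budget)) / budget
               for i in subset}
        i_j = max(est, key=est.get)
        q_j = sum(pull(i_j) for _ in range(T // 2)) / (T // 2)   # Exploit
        votes.append(i_j)
        exploit[i_j].append(q_j)
    counts = Counter(votes)
    popular = [i for i in counts if counts[i] > math.sqrt(k)]   # the set A
    if not popular:
        return votes[0]                                  # arbitrary arm
    return max(popular, key=lambda i: sum(exploit[i]) / len(exploit[i]))

print(one_round_best_arm([0.5] * 19 + [0.7], k=16, T=4000))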
We now prove Theorem 3.1. We show that a budget of T samples (arm pulls) per player, where

    T ≥ (24 c_A / √k) · Σ_{i=2}^{n} (1/Δ_i²) · ln(n/Δ_i) ,    (3)

suffices for the players to jointly identify the best arm i⋆ with the desired probability. Clearly, this would imply the bound stated in Theorem 3.1. We note that we did not try to optimize the constants in the above expression.
We begin by analyzing the Explore phase of the algorithm. Our first lemma shows that each player chooses the global best arm and identifies it as the local best arm with sufficiently large probability.

¹ In fact, by letting each player pick a slightly larger subset of O(√(log(1/δ)) · n/√k) arms, we can amplify the success probability to 1 − δ without needing to communicate more than 2 values per player. However, this approach only works when k = Ω(log(1/δ)).
Lemma 3.3. When (3) holds, each player identifies the (global) best arm correctly after the Explore phase with probability at least 2/√k.
We next address the Exploit phase. The next simple lemma shows that the popular arms (i.e. those selected by many players) are estimated to a sufficient precision.
Lemma 3.4. Provided that (3) holds, we have |p̃_i − p_i| ≤ (1/2)Δ⋆ for all arms i ∈ A with probability at least 5/6.
Due to lack of space, the proofs of the above lemmas are omitted and can be found in [16]. We can
now prove Theorem 3.1.
Proof (of Theorem 3.1). Let us first show that with probability at least 5/6, the best arm i⋆ is contained in the set A. To this end, notice that k_{i⋆} is the sum of k i.i.d. Bernoulli random variables {I_j}_j, where I_j is the indicator of whether player j chooses arm i⋆ after the Explore phase. By Lemma 3.3 we have that E[I_j] ≥ 2/√k for all j, hence by Hoeffding's inequality, Pr[k_{i⋆} ≤ √k] ≤ Pr[k_{i⋆} ≤ E[k_{i⋆}] − √k] ≤ exp(−2k/k) ≤ 1/6, which implies that i⋆ ∈ A with probability at least 5/6.
Next, note that with probability at least 5/6 the arm i ∈ A having the highest empirical reward p̃_i is the one with the highest expected reward p_i. Indeed, this follows directly from Lemma 3.4, which shows that with probability at least 5/6, for all arms i ∈ A the estimate p̃_i is within (1/2)Δ⋆ of the true bias p_i. Hence, via a union bound we conclude that with probability at least 2/3, the best arm is in A and has the highest empirical reward. In other words, with probability at least 2/3 the algorithm outputs the best arm i⋆.
3.2 (ε, δ)-PAC Algorithm
We now present an algorithm whose purpose is to recover an ε-optimal arm. Here, there might be more than one ε-best arm, so each "successful" player might come up with a different ε-best arm. Nevertheless, our analysis below shows that with high probability, a subset of the players can still agree on a single ε-best arm, which makes it possible to identify it among the votes of all players. Our algorithm is described in Algorithm 2, and the following theorem states its guarantees.

Theorem 3.5. Algorithm 2 identifies a 2ε-best arm with probability at least 2/3 using no more than

    O( (1/√k) · Σ_{i=2}^{n} (1/(Δ^ε_i)²) · log(n/Δ^ε_i) )

arm pulls per player, provided that 24 ≤ √k ≤ n. The algorithm uses a single communication round, in which each player communicates Õ(1) bits.
Before proving the theorem, we first state several key lemmas. In the following, let n_ε and n_{2ε} denote the number of ε-best and 2ε-best arms, respectively. Our analysis considers two different regimes: n_{2ε} ≤ (1/50)√k and n_{2ε} > (1/50)√k, and shows that in any case,

    T ≥ (400 c_A / √k) · Σ_{i=2}^{n} (1/(Δ^ε_i)²) · ln(24n/Δ^ε_i)    (4)

suffices for identifying a 2ε-best arm with the desired probability. Clearly, this implies the bound stated in Theorem 3.5.
The first lemma shows that at least one of the players is able to find an ε-best arm. As we later show, this is sufficient for the success of the algorithm in case there are many 2ε-best arms.
Lemma 3.6. When (4) holds, at least one player successfully identifies an ε-best arm in the Explore phase, with probability at least 5/6.
The next lemma is more refined and states that in case there are few 2ε-best arms, the probability of each player to successfully identify an ε-best arm grows linearly with n_ε.
Lemma 3.7. Assume that n_{2ε} ≤ (1/50)√k. When (4) holds, each player identifies an ε-best arm in the Explore phase, with probability at least 2n_ε/√k.
6
The last lemma we need analyzes the accuracy
Algorithm 2 O NE - ROUND ?- ARM
of the estimated rewards of arms in the set A.
input time horizon T , accuracy ?
Lemma 3.8. With probability at least 5/6, we output an arm
have |?
pi ? pi | ? ?/2 for all arms i ? A.
1: for player j = 1 to k do
?
2:
choose a subset Aj of 12n/ k arms uniFor the proofs of the above lemmas, refer to
formly at random
[16]. We now turn to prove Theorem 3.5.
3:
E XPLORE: execute ij ? A(Aj , ?) using
at
most 12 T pulls (and halting the algorithm
Proof. We shall prove that with probability 5/6
early if necessary);
the set A contains at least one ?-best arm. This
if the algorithm fails to identify any arm or
would complete the proof, since Lemma 3.8 asdoes not terminate gracefully, let ij be an
sures that with probability 5/6, the estimates p?i
arbitrary
arm
of all arms i ? A are at most ?/2-away from the
4:
E
XPLOIT
: pull arm ij for 12 T times, and
true reward pi , and in turn implies (via a union
let q?j be the average reward
bound) that with probability 2/3 the arm i ? A
communicate the numbers ij , q?j
having the maximal empirical reward p?i must 5:
6: end for
be a 2?-best arm.
7: let ki be the number of players j with ij = i
?
P
1
First, consider the case n2? > 50
k. Lemma
8: let ti = 12 ki T and p?i = (1/ki ) {j : ij =i} q?j
3.6 shows that with probability 5/6 there exists
for all i
a player j that identifies an ?-best arm ij . Since
9: define A = {i ? [n] : ti ? (1/?2 ) ln(12n)}
for at least n2? arms ?i ? 2?, we have
10: return arg maxi?A p?i ; if the set A is empty,
400 n2? ? 1 24n
output an arbitrary arm.
tij ? 12 T ? ? ?
ln
2?
2 k (2?)2
1
? 2 ln(12n) ,
?
that is, ij ? A.
?
1
Next, consider the case n2? ? 50
k. Let N denote the number of players that identified some
?-best arm. The random variable N is a sum of Bernoulli random variables {Ij }j ?
where Ij indicates whether player j identified some
?-best
arm.
By
Lemma
3.7,
E[I
]
?
2n
/
k and thus
?
? j
?
2
by Hoeffding?s inequality, Pr[N < n? ?k] = Pr[N ? E[N ] ? ?n? k] ? exp(?2n? ) ? 1/6 .
That is, with probability 5/6, at least n? k players found an ?-best arm. A pigeon-hole
argument
?
now shows that in this case there exists an ?-best arm i? selected by at least k players. Hence,
with probability
5/6 the number of samples of this arm collected in the E XPLOIT phase is at least
?
ti? ? kT /2 > (1/?2 ) ln(12n), which means that i? ? A.
3.3 Lower Bound
The following theorem suggests that in general, for identifying the best arm, k players achieve a multiplicative speed-up of at most Õ(√k) when allowing one transmission per player (at the end of the game). Clearly, this also implies that a similar lower bound holds in the PAC setup, and proves that our algorithmic results for the one-round case are essentially tight.
Theorem 3.9. For any k-player strategy that uses a single round of communication, there exist rewards p_1, ..., p_n ∈ [0, 1] and an integer T such that
• each individual player must use at least T/√k arm pulls for them to collectively identify the best arm with probability at least 2/3;
• there exists a single-player algorithm that needs at most Õ(T) pulls for identifying the best arm with probability at least 2/3.
The proof of the theorem is omitted due to space constraints and can be found in [16].
4 Multiple Communication Rounds
In this section we establish an explicit tradeoff between the performance of a multi-player algorithm and the number of communication rounds it uses, in terms of the accuracy ε. Our observation is that by allowing O(log(1/ε)) rounds of communication, it is possible to achieve the optimal speedup of factor k. That is, we do not gain any improvement in learning performance by allowing more than O(log(1/ε)) rounds.
Our algorithm is given in Algorithm 3. The idea is to eliminate in each round r (i.e., right after the r-th communication round) all 2^{−r}-suboptimal arms. We accomplish this by letting each player sample uniformly all remaining arms and communicate the results to the other players. Then, players are able to eliminate suboptimal arms with high confidence. If each such round is successful, after log₂(1/ε) rounds only ε-best arms survive. Theorem 4.1 below bounds the number of arm pulls used by this algorithm (a proof can be found in [16]).

Algorithm 3 Multi-Round ε-Arm
input: (ε, δ)
output: an arm
1: initialize S_0 ← [n], r ← 0, t_0 ← 0
2: repeat
3:   set r ← r + 1
4:   let ε_r ← 2^{−r}, t_r ← (2/(kε_r²)) ln(4nr²/δ)
5:   for player j = 1 to k do
6:     sample each arm i ∈ S_{r−1} for t_r − t_{r−1} times
7:     let p̃ʳ_{j,i} be the average reward of arm i (in all rounds so far of player j)
8:     communicate the numbers p̃ʳ_{j,1}, ..., p̃ʳ_{j,n}
9:   end for
10:  let p̃ʳ_i = (1/k) Σ_{j=1}^{k} p̃ʳ_{j,i} for all i ∈ S_{r−1}, and let p̃ʳ⋆ = max_{i∈S_{r−1}} p̃ʳ_i
11:  set S_r ← S_{r−1} ∖ {i ∈ S_{r−1} : p̃ʳ_i < p̃ʳ⋆ − ε_r}
12: until ε_r ≤ ε/2 or |S_r| = 1
13: return an arm from S_r

Theorem 4.1. With probability at least 1 − δ, Algorithm 3
• identifies the optimal arm using

    O( (1/k) · Σ_{i=2}^{n} (1/(Δ^ε_i)²) · log(n/δ) · log(1/Δ^ε_i) )

arm pulls per player;
• terminates after at most 1 + ⌈log₂(1/ε)⌉ rounds of communication (or after 1 + ⌈log₂(1/Δ⋆)⌉ rounds for ε = 0).
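A direct Python transcription of Algorithm 3 under Bernoulli rewards follows; the `pull` callback and the toy instance at the bottom are ours.

import math, random

def multi_round_eps_arm(pull, n, k, eps, delta, seed=0):
    random.seed(seed)
    S, r, t_prev = list(range(n)), 0, 0
    sums = [[0.0] * n for _ in range(k)]      # per-player running sums
    while True:
        r += 1
        eps_r = 2.0 ** (-r)
        t_r = math.ceil(2.0 / (k * eps_r ** 2)
                        * math.log(4 * n * r * r / delta))
        for j in range(k):                    # each player tops up to t_r pulls
            for i in S:
                sums[j][i] += sum(pull(i) for _ in range(t_r - t_prev))
        # communication round: average the k players' estimates
        p = {i: sum(sums[j][i] for j in range(k)) / (k * t_r) for i in S}
        best = max(p.values())
        S = [i for i in S if p[i] >= best - eps_r]   # drop 2^-r-suboptimal arms
        t_prev = t_r
        if eps_r <= eps / 2 or len(S) == 1:
            return S[0]

means = [0.4, 0.55, 0.6]
print(multi_round_eps_arm(lambda i: float(random.random() < means[i]),
                          n=3, k=4, eps=0.1, delta=0.05))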
By properly tuning the elimination thresholds ε_r of Algorithm 3 in accordance with the target accuracy ε, we can establish an explicit trade-off between the number of communication rounds and the number of arm pulls each player needs. In particular, we can design a multi-player algorithm that terminates after at most R communication rounds, for any given parameter R > 0. This, however, comes at the cost of a compromise in learning performance, as quantified in the following corollary.
Corollary 4.2. Given a parameter R > 0, set ε_r ← ε^{r/R} for all r ≥ 1 in Algorithm 3. With probability at least 1 − δ, the modified algorithm
• identifies an ε-best arm using Õ( (ε^{−2/R}/k) · Σ_{i=2}^{n} (1/Δ^ε_i)² ) arm pulls per player;
• terminates after at most R rounds of communication.
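The cost of reducing the number of rounds is easy to quantify numerically; the ε^{−2/R} inflation factor of Corollary 4.2 behaves as follows (a tiny illustrative computation):

# The eps^(-2/R) overhead of Corollary 4.2 for a concrete accuracy.
eps = 0.01
for R in (1, 2, 4, 8, 16):
    print(R, "rounds -> pull overhead factor", eps ** (-2.0 / R))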
5 Conclusions and Further Research
We have considered a collaborative MAB exploration problem, in which several independent players explore a set of arms with a common goal, and obtained the first non-trivial results in such a setting. Our main results apply to the particularly interesting regime where each of the players is allowed a single transmission; this setting fits naturally into common distributed frameworks such as MapReduce. An interesting open question in this context is whether one can obtain a strictly better speed-up result (which, in particular, is independent of ε) by allowing more than a single round. Even when allowing merely two communication rounds, it is unclear whether the √k speed-up can be improved. Intuitively, the difficulty here is that in the second phase of a reasonable strategy each
player should focus on the arms that excelled in the first phase; this makes the sub-problems faced in the second phase as hard as the entire MAB instance, in terms of the quantity H_ε. Nevertheless, we expect our one-round approach to serve as a building block in the design of future distributed exploration algorithms that are applicable in more complex communication models.
An additional interesting problem for future research is how to translate our results to the regret minimization setting. In particular, it would be nice to see a conversion of algorithms like UCB [5] to a distributed setting. In this respect, perhaps a more natural distributed model is one resembling that of Kanade et al. [17], who have established a regret vs. communication trade-off in the non-stochastic setting.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In NIPS, pages 873–881, 2011.
[2] D. Agarwal, B.-C. Chen, P. Elango, N. Motgi, S.-T. Park, R. Ramakrishnan, S. Roy, and J. Zachariah. Online models for content optimization. In NIPS, pages 17–24, December 2008.
[3] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In COLT, pages 41–53, 2010.
[4] P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
[5] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[6] M. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. Arxiv preprint arXiv:1204.3514, 2012.
[7] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Algorithmic Learning Theory, pages 23–37. Springer, 2009.
[8] D. Chakrabarti, R. Kumar, F. Radlinski, and E. Upfal. Mortal multi-armed bandits. In NIPS, pages 273–280, 2008.
[9] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. In ALT, 2012.
[10] H. Daumé III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Protocols for learning classifiers on distributed data. AISTAT, 2012.
[11] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Commun. ACM, 51(1):107–113, Jan. 2008.
[12] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.
[13] J. Duchi, A. Agarwal, and M. J. Wainwright. Distributed dual averaging in networks. NIPS, 23, 2010.
[14] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. The Journal of Machine Learning Research, 7:1079–1105, 2006.
[15] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification. NIPS, 2011.
[16] E. Hillel, Z. Karnin, T. Koren, R. Lempel, and O. Somekh. Distributed exploration in multi-armed bandits. arXiv preprint arXiv:1311.0800, 2013.
[17] V. Kanade, Z. Liu, and B. Radunovic. Distributed non-stochastic experts. In Advances in Neural Information Processing Systems 25, pages 260–268, 2012.
[18] Z. Karnin, T. Koren, and O. Somekh. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[19] K. Liu and Q. Zhao. Distributed learning in multi-armed bandit with multiple players. IEEE Transactions on Signal Processing, 58(11):5667–5681, Nov. 2010.
[20] S. Mannor and J. Tsitsiklis. The sample complexity of exploration in the multi-armed bandit problem. The Journal of Machine Learning Research, 5:623–648, 2004.
[21] O. Maron and A. W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In NIPS, 1994.
[22] V. Mnih, C. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML, pages 672–679. ACM, 2008.
[23] F. Radlinski, M. Kurup, and T. Joachims. How does clickthrough data reflect retrieval quality? In CIKM, pages 43–52, October 2008.
[24] Y. Yue and T. Joachims. Interactively optimizing information retrieval systems as a dueling bandits problem. In ICML, page 151, June 2009.
Fast Learning with Predictive Forward Models
Carlos Brody∗
Dept. of Computer Science
lIMAS, UNAM
Mexico D.F. 01000
Mexico.
e-mail: carlos@hope.caltech.edu
Abstract
A method for transforming performance evaluation signals distal both in
space and time into proximal signals usable by supervised learning algorithms, presented in [Jordan & Jacobs 90], is examined. A simple observation concerning differentiation through models trained with redundant
inputs (as one of their networks is) explains a weakness in the original
architecture and suggests a modification: an internal world model that
encodes action-space exploration and, crucially, cancels input redundancy
to the forward model is added. Learning time on an example task, cartpole balancing, is thereby reduced about 50 to 100 times.
1
INTRODUCTION
In many learning control problems, the evaluation used to modify (and thus improve) control may not be available in terms of the controller's output: instead, it
may be in terms of a spatial transformation of the controller's output variables (in
which case we shall term it as being "distal in space"), or it may be available only
several time steps into the future (termed as being "distal in time"). For example,
control of a robot arm may be exerted in terms of joint angles, while evaluation may
be in terms of the endpoint cartesian coordinates; furthermore, we may only wish
to evaluate the endpoint coordinates reached after a certain period of time: the co-
∗Current address: Computation and Neural Systems Program, California Institute of Technology, Pasadena CA.
ordinates reached at the end of some motion, for instance. In such cases, supervised
learning methods are not directly applicable, and other techniques must be used.
Here we study one such technique (proposed for cases where the evaluation is distal
in both space and time by [Jordan & Jacobs 90)), analyse a source of its problems,
and propose a simple solution for them which leads to fast, efficient learning.
We first describe two methods, and then combine them into the "predictive forward
modeling" technique with which we are concerned.
1.1
FORWARD MODELING
"Forward Modeling" [Jordan & Rumelhart 90] is useful for dealing with evaluations
which are distal in space; it involves the construction of a differentiable model to
approximate the controller-action → evaluation transformation. Let our controller
have internal parameters w, output c, and be evaluated in space e, where e = e(c) is an unknown but well-defined transformation. If there is a desired output in space e, called e*, we can write an "error" function, that is, an evaluation we wish minimised, and differentiate it w.r.t. the controller's weights to obtain
E = (e* − e)²,    ∂E/∂w = (∂c/∂w) · (∂e/∂c) · (∂E/∂e)    (1)
Using a differentiable controller allows us to obtain the first factor in the second
equation, and the third factor is also known; but the second factor is not. However,
if we construct a differentiable model (called a "forward model") of e(c), then we can obtain an approximation to the second term by differentiating the model, and use this to obtain an estimate of the gradient ∂E/∂w through equation (1); this
can then be used for comparatively fast minimisation of E, and is what is known
as "forward modeling".
1.2
PREDICTIVE CRITICS
To deal with evaluations which are distal in time, we may use a "critic" network, as
in [Barto, Sutton & Anderson 83]. For a particular control policy implemented by
the controller network, the critic is trained to predict the final evaluation that will
be obtained given the current state - using, for example, Sutton's TD algorithm
[Sutton 88]. The estimated final evaluation is then available as soon as we enter a
state, and so may in turn be used to improve the control policy. This approach is
closely related to dynamic programming [Barto, Sutton & Watkins 89].
1.3
PREDICTIVE FORWARD MODELS
While the estimated evaluation we obtain from the critic is no longer distal in time,
it may still be distal in space. A natural proposal in such cases, where the evaluation
signal is distal both in space and time, is to combine the two techniques described
above: use a differentiable model as a predictive critic [Jordan & Jacobs 90]. If
we know the desired final evaluation, we can then proceed as in equation (1) and
obtain the gradient of the error w.r.t. the controller's weights. Schematically, this
would look like figure 1. When using a backprop network for the predictive model,
[Figure: state vector → CONTROLLER NETWORK → control → PREDICTIVE MODEL → predicted evaluation]
Figure 1: Jordan and Jacobs' predictive forward modeling architecture. Solid lines indicate data paths, the dashed line indicates back propagation.
we would backpropagate through it, through its control input, and then into the
controller to modify the controller network. We should note that since predictions
make no sense without a particular control policy, and the controller is only modified
through the predictive model, both networks must be trained simultaneously.
[Jordan & Jacobs 90] applied this method to a well-known problem, that of learning to balance an inverted pendulum on a movable cart by exerting appropriate
horizontal forces on the cart. The same task, without differentiating the critic, was
studied in [Barto, Sutton & Anderson 83]. There, reinforcement learning methods
were used instead to modify the controller's weights; these perform a search which
in some cases may be shown to follow, on average, the gradient of the expected
evaluation w.r .t. the network weights. Since differentiating the critic allows this
gradient to be found directly, one would expect much faster learning when using
the architecture of figure 1. However, Jordan and Jacobs' results show precisely the
opposite: it is surprisingly slow.
2
THE REDUNDANCY PROBLEM
We can explain the above surprising result if we consider the fact that the predictive
model network has redundant inputs: the control vector c is a function of the state vector (call this c = η(s)). Let κ and σ be the number of components of the control and state vectors, respectively. Instead of drawing its inputs from the entire volume of (κ+σ)-dimensional input space, the predictor is trained only with inputs which lie on the σ-dimensional manifold defined by the relation η. Away from the manifold the network is free to produce entirely arbitrary outputs. Differentiation of the model will then provide non-arbitrary gradients only for directions tangential to the manifold; this is a condition that the axes of the control dimensions will not, in general, satisfy.¹ This observation, which concerns any model trained with redundant inputs, is the very simple yet principal point of this paper.
One may argue that since the control policy is continually changing, the redundancy
picture sketched out here is not in fact accurate: as the controller is modified, many
¹Note that if it is single-valued, there is no way the manifold can "fold around" to cover all (or most) of the κ + σ input space.
[Figure: EVALUATION versus CONTROL OUTPUT; the "real" evaluation function and estimated evaluation-function models crossing it at points a, b, c, d]
Figure 2: The evaluation as a function of control action. Curves A,B,C,D represent
possible (wrong) estimates of the "real" curve made by the predictive model network.
possible control policies are "seen" by the predictor, so creating volume in input
space and leading to correct gradients obtained from the predictor. However, the
way in which this modification occurs is significant. An argument based on empirical
observations will be made to sustain this.
Consider the example shown in figure 2. The graph shows what the "real" evaluation
at some point in state space is, as a function of a component of the control action
taken at that point; this function is what the predictive network should approximate.
Suppose the function implemented by the predictive network initially looks like the
curve which crosses the "real" evaluation function at point (a); suppose also that the
current action taken also corresponds to point (a). Here we see a one-dimensional
example of the redundancy problem: though the prediction at this point is entirely
accurate, the gradient is not. If we wish to minimise the predicted evaluation, we
would change the action in the direction of point (b). Examples of point (a) will
no longer be presented to the predictive network, so it could quite plausibly modify
itself simply so as to look like the estimated evaluation curve "B" which is shown
crossing point (b) (a minimal change necessary to continue being correct). Again,
the gradient is wrong and minimising the prediction will change the action in the
same direction as before, perhaps to point (c); then to (d), and so on. Eventually,
the prediction, though accurate, will have zero gradient, as in curve "D", and no
modifications will occur. In practice, we have observed networks "getting stuck"
in this fashion. Though the objective was to minimise the evaluation, the system
stops "learning" at a point far from optimal.
The problem may be solved, as Jordan and Jacobs did, by introducing noise in
the controller's output, thus breaking the redundancy. Unfortunately, this degrades
[Figure: state vector → CONTROLLER NETWORK → control vector → INTERMEDIATE (WORLD) MODEL → predicted state → PREDICTIVE MODEL → predicted evaluation; dashed line: backpropagation]
Figure 3: The proposed system architecture. Again, solid lines represent data paths while
the dashed line represents backpropagation (or differentiation).
signal quality and means that since we are predicting future evaluations, we wish to
predict the effects of future noise - a notoriously difficult objective. The predictive
network eventually outputs the evaluation's expectation value, but this can take a
long time.
3
USING AN INTERMEDIATE MODEL
3.1
AN EXTRA WORLD MODEL
Another way to solve the redundancy problem is through the use of what is here
called an "intermediate model": a model of the world the controller is interacting
with. That is, if 8(t) represents the state vector at time t, and c(t) the controller
output at time t, it is a model of the function 1 where 8(t + 1)
1(8(t), c(t)).
=
This model is used as represented schematically in figure 3. It helps in modularising
the learning task faced by the predictive model [Chrisley 90], but more interestingly,
it need not be trained simultaneously with the controller since its output does not
depend on future control policy. Hence, it can be trained separately, with examples
drawn from its entire (state x action) input space, providing gradient signals without
arbitrary components when differentiated. Once trained, we freeze the intermediate
model's weights and insert it into the system as in figure 3; we then proceed to train
the controller and predictive model as before. The predictive model will no longer
have redundant inputs when trained either, so it too will provide correct gradient
signals. Since all arbitrary components have been eliminated, the speedup expected
from using differentiable predictive models should now be obtainable.²
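A minimal sketch of the resulting three-network stack, assuming linear toy modules (my own construction; training of the world and predictive models themselves is elided, since the point is only that the gradient path to the controller runs through a frozen intermediate model):

import numpy as np

rng = np.random.default_rng(1)
Wf = rng.normal(size=(3, 5)) * 0.1   # frozen world model: s' = Wf @ [s; c]
Wp = rng.normal(size=(1, 3)) * 0.1   # predictive model of future evaluation
Wc = rng.normal(size=(2, 3)) * 0.1   # controller: c = Wc @ s

def rollout_grad(s):
    # differentiate the predicted evaluation w.r.t. the controller, with the
    # intermediate model trained beforehand and then frozen
    c = Wc @ s
    s_next = Wf @ np.concatenate([s, c])   # intermediate (world) model
    v = Wp @ s_next                        # predicted evaluation
    dv_dc = (Wp @ Wf[:, 3:]).ravel()       # gradient flows through frozen Wf
    return v, np.outer(dv_dc, s)           # dv/dWc via dc/dWc

s = rng.normal(size=3)
for _ in range(50):
    v, g = rollout_grad(s)
    Wc -= 0.05 * g                         # minimise the predicted evaluation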
3.2
AN EXAMPLE TASK
The intermediate model architecture was tested on the same example task as used
by Jordan and Jacobs, that of learning to balance a pole which is attached through
a hinge on its lower end to a movable cart. The control action is a real valued-force
²This same architecture was independently proposed in [Werbos 90], but without the explanation as to why the intermediate model is necessary instead of merely desirable.
[Figure: eight learning curves; x-axis "Learning trial" (0–1000), y-axis 0–1100]
Figure 4: The evolution of eight different learning networks, using the intermediate model.
applied to the cart; the evaluation signal is a "0" while the pole has not fallen over,
and the cart hasn't reached the edge of the finite-sized tracks it is allowed to move
on, a "1" when either of these events happens. A trial is then said to have failed,
and terminates.³
We count the number of learning trials needed before a controller is able to keep the
pole balanced for a significant amount of time (measured in simulated seconds).
Figure 4 shows the evolution of eight networks; most reach balancing solutions
within 100 to 300 failures. (These successful networks came from a batch of eleven:
the other three never reached solutions.) This is 50 to 100 times faster than without
the intermediate model, where 5000 to 30000 trials were needed to achieve similar
balancing times [Jordan & Jacobs 90].
We must now take into account the overhead needed to train the intermediate
model. This was done in 200 seconds of simulated time, while training the whole
system typically required some 400 seconds - the overhead is small compared to the
improvement achieved through the use of the intermediate model. However, off-line
training of the intermediate model requires an additional agency to organise the
selection and presentation of training examples. In the real world, we would either
need some device which could initialise the system at any point in state space, or we
would have to train through "flailing": applying random control actions, over many
trials, so as to eventually cover all possible states and actions. As the dimensionality
of the state representation rises for larger problems, intermediate model training will
become more difficult.
³The differential equations which were used as a model of this system may be found in
[Barto, Sutton & Anderson 83]. The parameters of the simulations were identical to those
used in [Jordan & Jacobs 90].
3.3
REMARKS
We should note that the need for covering all state space is not merely due to
the requirement of training an intermediate model: dynamic-programming based
techniques such as the ones mentioned in this paper are guaranteed to lead us to an
optimal control solution only if we explore the entire state space during learning.
This is due to their generality, since no a priori structure of the state space is
assumed. It might be possible to interleave the training of the intermediate model
with the training of the controller and predictor networks, so as to achieve both
concurrently. High-dimensional problems will still be problematic, but not just due
to intermediate model training - the curse of dimensionality is not easily avoided!
4
CONCLUSIONS
If we differentiate through a model trained with redundant inputs, we eliminate
possible arbitrary components (which are due to the arbitrary mixing of the inputs
that the model may use) only if we differentiate tangentially along the manifold
defined by the relationship between the inputs. For the architecture presented in
[Jordan & Jacobs 90], this is problematic, since the axes of the control vector will
typically not be tangential to the manifold. Once we take this into account, it is
clear why the architecture was not as efficient as expected; and we can introduce
an "intermediate" world model to avoid the problems that it had.
Using the intermediate model allows us to correctly obtain (through backpropagation, or differentiation) a real-valued vector evaluation on the controller's output.
On the example task presented here, this led to a 50 to 100-fold increase in learning speed, and suggests a much better scaling-up performance and applicability to
real-world problems than simple reinforcement learning, where real-valued outputs
are not permitted, and vector control outputs would train very slowly.
Acknowledgements
Many thanks are due to Richard Rohwer, who supervised the beginning of this
project, and to M. I. Jordan and R. Jacobs, who answered questions enlighteningly;
thanks are also due to Dr F. Bracho at lIMAS, UNAM, who provided the environment for the project's conclusion. This work was supported by scholarships from
CON ACYT in Mexico and from Caltech in the U.S.
References
[Ackley 88] D. H. Ackley, "Associative Learning via Inhibitory Search", in
D. S. Touretzky, ed., Advances in Neural Information Processing Systems 1,
Morgan Kaufmann 1989
[Barto, Sutton & Anderson 83] A. G. Barto, R. S. Sutton, and C. W. Anderson,
"Neuronlike Adaptive Elements that can Solve Difficult Control Problems",
IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No.5,
Sept/Oct. 1983
[Barto, Sutton & Watkins 89] A. G. Barto, R. S. Sutton, and C. J. C. H. Watkins,
"Learning and Sequential Decision Making", University of Massachusetts at
Amherst COINS Technical Report 89-95, September 1989
[Chrisley 90] R. L. Chrisley, "Cognitive Map Construction and Use: A Parallel Distributed Approach", in Touretzky, Elman, Sejnowski, and Hinton, eds., Connectionist Models: Proceedings of the 1990 Summer School, Morgan Kaufmann
1991,
[Jordan & Jacobs 90] M. I. Jordan and R. A. Jacobs, "Learning to Control an Unstable System with Forward Modeling", in D. S. Touretzky, ed., Advances in
Neural Information Processing Systems 2, Morgan Kaufmann 1990
[Jordan & Rumelhart 90] M. I. Jordan and D. E. Rumelhart, "Supervised learning
with a Distal Teacher" , preprint.
[Nguyen & Widrow 90] D. Nguyen and B. Widrow, ''The Truck Backer-Upper: An
Example of Self-Learning in Neural Networks", in Miller, Sutton and Werbos,
eds., Neural Networks for Control, MIT Press 1990
[Sutton 88] R. S. Sutton, "Learning to Predict by the Methods of Temporal Differences", Machine Learning 3: 9-44, 1988
[Werbos 90] P. Werbos, "Architectures for Reinforcement Learning", in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press 1990
Dimension-Free Exponentiated Gradient
Francesco Orabona
Toyota Technological Institute at Chicago
Chicago, USA
[email protected]
Abstract
I present a new online learning algorithm that extends the exponentiated gradient framework to infinite dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(U T + 1) √T), instead of the standard O((U² + 1) √T), achievable without knowing U. For this analysis, I introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, I also show that the algorithm is optimal up to a √log(U T) term for linear and Lipschitz losses.
1
Introduction
Online learning provides a scalable and flexible approach for solving a wide range of prediction
problems, including classification, regression, ranking, and portfolio management. These algorithms
work in rounds, where at each round a new instance is given and the algorithm makes a prediction.
After the true label of the instance is revealed, the learning algorithm updates its internal hypothesis.
The aim of the classifier is to minimize the cumulative loss it suffers due to its prediction, such as
the total number of mistakes.
Popular online algorithms for classification include the standard Perceptron and its many variants,
such as kernel Perceptron [6], and p-norm Perceptron [7]. Other online algorithms, with properties
different from those of the standard Perceptron, are based on multiplicative (rather than additive)
updates, such as Winnow [10] for classification and Exponentiated Gradient (EG) [9] for regression.
Recently, Online Mirror Descent (OMD)¹ has been proposed as a general meta-algorithm for
online learning, parametrized by a regularizer [16]. By appropriately choosing the regularizer, most
online learning algorithms are recovered as special cases of OMD. Moreover, performance guarantees can also be derived simply by instantiating the general OMD bounds to the specific regularizer
being used. So, for all the first-order online learning algorithms, it is possible to prove regret bounds of the order of O(f(u)√T), where T is the number of rounds and f(u) is the regularizer used in OMD, evaluated on the competitor vector u. Hence, different choices of the regularizer will give rise to different algorithms and guarantees. For example, p-norm algorithms can be derived from the squared Lp-norm regularization, while EG can be derived from the entropic one. In particular, for the Euclidean regularizer (√T/η)‖u‖², we have a regret bound of O(√T(‖u‖²/η + η)). Knowing ‖u‖ it is possible to tune η to have a O(‖u‖√T) bound, that is optimal [1]. On the other hand, EG has a regret bound of O(√(T log d)), where d is the dimension of the space.
In this paper, I use OMD to extend EG to infinite dimensional spaces, through the use of a carefully
designed time-varying regularizer. The algorithm, which I call Dimension-Free Exponentiated Gradient (DFEG), does not need direct access to single components of the vectors; rather, it only requires
¹The algorithm should be more correctly called Follow the Regularized Leader; however, here I follow Shalev-Shwartz in [16], and I will denote it by OMD.
to access them through inner products. Hence, DFEG can be used with kernels too, extending for the first time EG to the kernel domain. I prove a regret bound of O(‖u‖ log(‖u‖T + 1) √T). Up to logarithmic terms, the bound of DFEG is equal to the optimal bound obtained through the knowledge of ‖u‖, but it does not require the tuning of any parameter.
I built upon ideas of [19], but I designed my new algorithm as an instantiation of OMD, rather than
using an ad-hoc analysis. I believe that this route increases the comprehension of the inner working
of the algorithm, its relation to other algorithms, and it makes it easier to extend it in other directions as well. In order to analyze DFEG, I also introduce new and general techniques to cope with time-varying regularizers for OMD, using the local smoothness of the dual of the regularization function,
that might be of independent interest.
I also extend and improve the lower bound in [19], to match the upper bound of DFEG up to a √log T term, and to show an implicit trade-off on the regret versus
different competitors.
1.1
Related works
Exponentiated gradient algorithms have been proposed by [9]. The algorithms have multiplicative
updates and regret bounds that depend logarithmically on the dimension of the input space. In
particular, they proposed a version of EG where the weights are not normalized, called EGU.
A closer algorithm to mine is the epoch-free algorithm in [19]. Indeed, DFEG is equivalent to theirs when used on one-dimensional problems. However, the extension to infinite dimensional spaces is nontrivial and very different in nature from their extension to d-dimensional problems, which consists of running a copy of the algorithm independently on each coordinate. Their regret bound depends on the
dimension of the space and can neither be used with infinite dimensional spaces nor with kernels.
Vovk proposed two algorithms for square loss, with regret bounds of O((‖u‖ + Y)√T) and O(‖u‖√T) respectively, where Y is an upper bound on the range of the target values [20]. A matching lower bound is also presented, proving the optimality of the second algorithm. However, the algorithms seem specific to the square loss and it is not possible to adapt them to other losses. Indeed, the lower bound I prove shows that for linear and Lipschitz losses a √log(‖u‖T) term
is unavoidable. Moreover, the second algorithm, being an instantiation of the Aggregating Algorithm [21], does not seem to have an efficient implementation.
My algorithm also shares similarities in spirit with the family of self-confident algorithms [2, 7, 15],
in which the algorithm self-tunes its parameters based on internal estimates.
From the point of view of the proof technique, the primal-dual analysis of OMD is due to [15, 17].
Starting from the work of [8], it is now clear that OMD can be easily analyzed using only a few basic
convex duality properties. See the recent survey [16] for a lucid description of these developments.
The time-varying regularization for OMD has been explored in [4, 12, 15], but in none of these works do the negative terms in the bound due to the time-varying regularizer play a decisive role.
The use of the local estimates of strong smoothness is new, as far as I know. A related way to have
a local analysis is through the local norms [16], but my approach is better tailored to my needs.
2
Problem setting and definitions
In the online learning scenario the learning algorithms work in rounds [3]. Let X be a Euclidean vector space²; at each round t, an instance xt ∈ X is presented to the algorithm, which then predicts a label ŷt ∈ R. Then, the correct label yt is revealed, and the algorithm pays a loss ℓ(ŷt, yt) for having predicted ŷt instead of yt. The aim of the online learning algorithm is to minimize the cumulative sum of the losses, on any sequence of data/labels {(xt, yt)}_{t=1}^T. Typical examples of loss functions are, for example, the absolute loss, |ŷt − yt|, and the hinge loss, max(1 − ŷt yt, 0). Note that the loss function can change over time, so in the following I will denote by ℓt : R → R the generic loss function received by the algorithm at time t. In this paper I focus on linear prediction of the form ŷt = ⟨wt, xt⟩, where wt ∈ X represents the hypothesis of the online algorithm at time t.
²All the theorems hold also in general Hilbert spaces, but for simplicity of exposition I consider a Euclidean setting.
Algorithm 1 Dimension-Free Exponentiated Gradient.
Parameters: 0.882 ≤ a ≤ 1.109, L > 0, δ > 0.
Initialize: θ1 = 0 ∈ X, H0 = δ
for t = 1, 2, . . . do
  Receive ‖xt‖, where xt ∈ X
  Set Ht = Ht−1 + L² max(‖xt‖, ‖xt‖²)
  Set αt = a√Ht, βt = Ht^{3/2}
  if ‖θt‖ == 0 then choose wt = 0
  else choose wt = θt/(βt‖θt‖) · exp(‖θt‖/αt)
  Suffer loss ℓt(⟨wt, xt⟩)
  Update θt+1 = θt − ∂ℓt(⟨wt, xt⟩) xt
end for
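The update is cheap to implement. The following is a minimal sketch of Algorithm 1 for the absolute loss (an illustration, not reference code; the toy data stream at the end is an assumption):

import numpy as np

class DFEG:
    def __init__(self, dim, a=0.882, L=1.0, delta=1.0):
        self.theta = np.zeros(dim)
        self.H = delta
        self.a, self.L = a, L

    def predict(self, x):
        # receive ||x_t||, update H_t, alpha_t, beta_t, and output w_t
        self.H += self.L**2 * max(np.linalg.norm(x), np.linalg.norm(x)**2)
        alpha = self.a * np.sqrt(self.H)
        beta = self.H**1.5
        nt = np.linalg.norm(self.theta)
        if nt == 0:
            self.w = np.zeros_like(self.theta)
        else:
            self.w = self.theta / (beta * nt) * np.exp(nt / alpha)
        return float(self.w @ x)

    def update(self, x, y, y_hat):
        g = np.sign(y_hat - y)        # subgradient of |y_hat - y|
        self.theta -= g * x           # theta_{t+1} = theta_t - dloss * x_t

# one pass over a toy stream
rng = np.random.default_rng(0)
learner, loss, u = DFEG(dim=5), 0.0, None
u = rng.normal(size=5)
for _ in range(1000):
    x = rng.normal(size=5)
    y = u @ x
    y_hat = learner.predict(x)
    loss += abs(y_hat - y)
    learner.update(x, y, y_hat)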
We strive to design online learning algorithms for which it is possible to prove a relative regret bound. Such analysis bounds the regret, that is, the difference between the cumulative loss of the algorithm, Σ_{t=1}^T ℓt(⟨wt, xt⟩), and the one of an arbitrary and fixed competitor u, Σ_{t=1}^T ℓt(⟨u, xt⟩). We will consider L-Lipschitz losses, that is, |ℓt(y) − ℓt(y′)| ≤ L|y − y′|, ∀y, y′.
I now introduce some basic notions of convex analysis that are used in the paper. I refer to [14] for definitions and terminology. I consider functions f : X → R that are closed and convex. Given a closed and convex function f with domain S ⊆ X, its Fenchel conjugate f* : X → R is defined as f*(u) = sup_{v∈S} ⟨v, u⟩ − f(v). The Fenchel-Young inequality states that f(u) + f*(v) ≥ ⟨u, v⟩ for all v, u. A vector x is a subgradient of a convex function f at v if f(u) − f(v) ≥ ⟨u − v, x⟩ for any u in the domain of f. The differential set of f at v, denoted by ∂f(v), is the set of all the subgradients of f at v. If f is also differentiable at v, then ∂f(v) contains a single vector, denoted by ∇f(v), which is the gradient of f at v. Strong convexity and strong smoothness are key properties in the design of online learning algorithms; they are defined as follows. A function f is β-strongly convex with respect to a norm ‖·‖ if for any u, v in its domain, and any x ∈ ∂f(u),
f(v) ≥ f(u) + ⟨v − u, x⟩ + (β/2)‖u − v‖².
The Fenchel conjugate f* of a β-strongly convex function f is everywhere differentiable and (1/β)-strongly smooth [8]; this means that for all u, v ∈ X,
f*(v) ≤ f*(u) + ⟨v − u, ∇f*(u)⟩ + (1/(2β))‖u − v‖².
In the remainder of the paper all the norms considered will be the L2 ones.
3
Dimension-Free Exponentiated Gradient
In this section I describe the DFEG algorithm. The pseudo-code is in Algorithm 1. It shares some similarities with the exponentiated gradient with unnormalized weights algorithm [9], with the self-tuning variant of exponentiated gradient in [15], and with the epoch-free algorithm in [19]. However, note that it does not access single coordinates of wt and xt, but only their inner products. Hence, we expect the algorithm not to depend on the dimension of X, which can even be infinite. In other words, DFEG can be used with kernels as well, contrary to all the algorithms mentioned above.
For the DFEG algorithm we have the following regret bound, which will be proved in Section 4.
Theorem 1. Let 0.882 ≤ a ≤ 1.109, δ > 0; then, for any sequence of input vectors {xt}_{t=1}^T, any sequence of L-Lipschitz convex losses {ℓt(·)}_{t=1}^T, and any u ∈ X, the following bound on the regret holds for Algorithm 1:
Σ_{t=1}^T ℓt(⟨wt, xt⟩) − Σ_{t=1}^T ℓt(⟨u, xt⟩) ≤ 4 exp(1 + 1/a)/(L√δ) + a‖u‖√H_T ( ln(H_T^{3/2}‖u‖) − 1 ),
where H_T = δ + Σ_{t=1}^T L² max(‖xt‖, ‖xt‖²).
The bound has a logarithmic part, typical of the family of exponentiated gradient algorithms, but instead of depending on the dimension, it depends on the norm of the competitor, ‖u‖. Hence, the regret bound of DFEG holds for infinite dimensional spaces as well, that is, it is dimension-free.
It is interesting to compare this bound with the usual bound for online learning using an L2 regularizer. Using a time-varying regularizer ft(w) = (√t/η)‖w‖², it is easy to see, e.g. [15], that the bound would be³ O((‖u‖²/η + η)√T). If an upper bound U on ‖u‖ is known, we can use it to tune η to obtain an upper bound of the order of O(U√T). On the other hand, we obtain for DFEG a bound of O(‖u‖ log(‖u‖T + 1)√T), that is, the optimal bound, up to logarithmic terms, without knowing U. So my bound goes to a constant if the norm of the competitor goes to zero. However, note that, for any fixed competitor, the gradient descent bound is asymptotically better.
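Dropping constants, the two bounds can be compared numerically as a function of ‖u‖; the snippet below is only illustrative (η fixed to 1, leading constants omitted):

import numpy as np

T, eta = 10000, 1.0
U = np.logspace(-2, 2, 5)
gd_bound = (U**2 / eta + eta) * np.sqrt(T)       # untuned L2 bound
dfeg_bound = U * np.log(U * T + 1) * np.sqrt(T)  # DFEG bound, up to constants
for u, g, d in zip(U, gd_bound, dfeg_bound):
    print(f"||u||={u:7.2f}  GD~{g:12.1f}  DFEG~{d:12.1f}")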
The lower bound on the range of a we get comes from technical details of the analysis. The parameter a is directly linked to the leading constant of the regret bound; therefore, it is intuitive that the
range of acceptable values must have a lower bound different from zero. This is also confirmed by
the lower bound in Theorem 2 below.
Notice that the bound is data-dependent because it depends on the sequence of observed input vectors xt . A data-independent bound can be easily obtained from the upper bound on the norm of the
input vectors. The use of the function max(‖xt‖, ‖xt‖²) is necessary to have such a data-dependent
bound and it seems that it cannot be avoided in order to prove the regret bound.
It is natural to ask if the log term in the bound can be avoided. Extending Theorem 7 in [19], we can
reply in the negative to this question. In particular, the following theorem shows that the regret of any
online learning algorithm has to satisfy a trade-off between the guarantees against the competitor
with norm equal to zero and the ones against other competitors. A similar trade-off has been proven
in the expert settings [5].
Theorem 2. Fix a non-trivial vector space X, a specific online learning algorithm, and let the sequence of losses be composed of linear losses. If the algorithm guarantees a zero regret against the competitor with zero L2 norm, then there exists a sequence of T vectors in X such that the regret against any other competitor is Ω(T). On the other hand, if the algorithm guarantees a regret at most ε > 0 against the competitor with zero L2 norm, then, for any 0 < γ < 1, there exists a T0 and a sequence of T ≥ T0 unitary norm vectors zt ∈ X, and a vector u ∈ X, such that
Σ_{t=1}^T ⟨u, zt⟩ − Σ_{t=1}^T ⟨wt, zt⟩ ≥ (1 − γ)‖u‖ √( (1/log 2) · T · log( γ‖u‖√T/(3ε) ) ) − √2 ε.
The proof can be found in the supplementary material. It is possible to show that the optimal γ is of the order of √(1/log T), so that the leading constant approaches √(1/log 2) ≈ 1.2011 when T goes to infinity. It is also interesting to note that an L2 regularizer suffers a loss of O(√T) against a competitor with zero norm, which cancels the √log T term.
4
Analysis
In this section I prove my main result. I will first briefly introduce the general OMD algorithm with
time-varying regularizers on which my algorithm is based.
4.1
Online mirror descent and local smoothness
Algorithm 2 is a generic meta-algorithm for online learning. Most of the online learning algorithms
can be derived from it, by choosing the functions ft and the vectors zt. The following lemma, which is
a generalization of Corollary 4 in [8], Corollary 3 in [4], and Lemma 1 in [12], is the main tool to
prove the regret bound for the DFEG algorithm. The proof is in the supplementary material.
³Despite what is claimed in Section 1 of [19], the use of the time-varying regularizer ft(w) = (√t/η)‖w‖² guarantees a sublinear regret for unconstrained online convex optimization, for any η > 0.
Algorithm 2 Time-varying Online Mirror Descent
Parameters: A sequence of convex functions f1, f2, . . . defined on S ⊆ X.
Initialize: θ1 = 0 ∈ X
for t = 1, 2, . . . do
  Choose wt = ∇f*t(θt)
  Observe zt ∈ X
  Update θt+1 = θt + zt
end for
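A sketch of this template in code (an illustration; the subgradient interface and the example L2 instantiation, with the factor-of-2 convention ft(w) = (√t/(2η))‖w‖², are assumptions):

import numpy as np

def omd(stream, grad_f_star):
    # Algorithm 2: w_t = grad f_t^*(theta_t);  theta_{t+1} = theta_t + z_t.
    theta = None
    for t, (x, subgrad) in enumerate(stream, start=1):
        if theta is None:
            theta = np.zeros_like(x)
        w = grad_f_star(t, theta)
        yield w
        theta = theta + (-subgrad(float(w @ x)) * x)   # z_t for a linear prediction

# e.g. f_t(w) = sqrt(t)/(2*eta) * ||w||^2 gives grad f_t^*(theta) = eta/sqrt(t) * theta:
l2 = lambda t, theta: 0.1 / np.sqrt(t) * theta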
Lemma 1. Assume Algorithm 2 is run with functions f1, f2, . . . defined on a common domain S ⊆ X. Then for any w′t, u ∈ S we have
Σ_{t=1}^T ⟨zt, u − w′t⟩ ≤ fT(u) + Σ_{t=1}^T ( f*t(θt+1) − f*t−1(θt) − ⟨w′t, zt⟩ ),
where we set f*0(θ1) = 0. Moreover, if f*1, f*2, . . . are twice differentiable, and max_{0≤τ≤1} ‖∇²f*t(θt + τ zt)‖ ≤ λt, then we have
f*t(θt+1) − f*t−1(θt) − ⟨wt, zt⟩ ≤ f*t(θt) − f*t−1(θt) + (λt/2)‖zt‖².
Note that the above Lemma is usually stated assuming the strong convexity of ft, which is equivalent to the strong smoothness of f*t, which in turn, for twice differentiable functions, is equivalent to a global bound on the norm of the Hessian of f*t (see Theorem 2.1.6 in [11]). Here I take a different route, assuming the functions f*t to be twice differentiable, but using the weaker hypothesis of local boundedness of the Hessian of f*t. Hence, for twice differentiable conjugate functions, this bound is
always tighter than the ones in [4, 8, 12]. Indeed, in our case, the global strong smoothness cannot
be used to prove any meaningful regret bound.
We derive the Dimension-Free Exponentiated Gradient from the general OMD above. Set in Algorithm 2 ft(w) = αt‖w‖(log(βt‖w‖) − 1), where αt and βt are defined in Algorithm 1, and zt = −∂ℓt(⟨wt, xt⟩)xt. The proof idea of my theorem is the following. First, assume that we are on a round where we have a local upper bound on the norm of the Hessian of f*t. The usual approach in this kind of proof is to have a regularizer that is growing over time as √t, so that the terms f*t(θt) − f*t−1(θt) are negative and can be safely discarded. At the same time, the sum of the squared norms of the gradients will typically be of the order of O(√T), giving us a O(√T) regret bound (see for example the proofs in [4]). However, following this approach in DFEG we would have that the sum of the squared norms of the gradients grows much faster than O(√T). This is due to the fact that the global strong smoothness is too small. Hence I introduce a different proof method. In the following, I will show the surprising result that with my choice of the regularizers ft, the terms f*t(θt) − f*t−1(θt) and the squared norm of the gradient cancel out. Notice that already in [12, 13] it has been advocated not to discard those terms to obtain tighter bounds. Here the same terms play a major role in the proof, and they are present thanks to the time-varying regularization. This is in agreement with Theorem 9 in [19], which rules out algorithms with a fixed regularizer for obtaining regret bounds like Theorem 1.
It remains to bound the regret in the rounds where we do not have an upper bound on the norm of the Hessian. In these rounds I show that the norm of wt (and θt) is small enough so that the regret is still bounded, thanks to the choice of βt.
the Hessian. In these rounds I show that the norm of wt (and ? t ) is small enough so that the regret
is still bounded, thanks to the choice of ?t .
4.2
Proof of the main result
We start defining the new regularizer and show its properties in the following Lemma (proof
in the supplementary material). Note the similarities with EGU, where the regularizer is
Pd
d
i=1 wi (log(wi ) ? 1), w ? R , wi ? 0 [9].
Lemma 2. Define f(w) = α‖w‖(ln(β‖w‖) − 1), for α, β > 0. The following properties hold:
• f*(θ) = (α/β) exp(‖θ‖/α).
• ∇f*(θ) = θ/(β‖θ‖) · exp(‖θ‖/α).
• ‖∇²f*(θ)‖₂ ≤ 1/(β min(‖θ‖, α)) · exp(‖θ‖/α).
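Since the analysis leans on these identities, a quick finite-difference check of the second property is easy to run (a test sketch, not part of the paper; the constants are arbitrary):

import numpy as np

alpha, beta = 1.3, 2.7
f_star = lambda th: alpha / beta * np.exp(np.linalg.norm(th) / alpha)
grad   = lambda th: th / (beta * np.linalg.norm(th)) * np.exp(np.linalg.norm(th) / alpha)

rng = np.random.default_rng(0)
th, eps = rng.normal(size=4), 1e-6
num = np.array([(f_star(th + eps * e) - f_star(th - eps * e)) / (2 * eps)
                for e in np.eye(4)])
assert np.allclose(num, grad(th), atol=1e-5)   # matches the second property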
Equipped with a local upper bound on the Hessian of f*, we can now use Lemma 1. We notice that Lemma 1 also guides us in the choice of the sequences αt. In fact, if we want the regret to be O(√T), αt must be O(√T) too.
O( T ), ?t must be O( T ) too.
In the proof of Theorem 1 we also use the following three technical lemmas, whose proofs are in
the supplementary material. The first two are used to upper bound the exponential function with
quadratic functions.
Lemma 3. Let M > 0, then for any
p+
exp(M )?p 2
x
M2
exp(M )
M 2 +1
? p ? exp(M ), and 0 ? x ? M , we have exp(x) ?
.
Lemma 4. Let M > 0, then for any 0 ? x ? M , we have exp(x) ? 1 + x +
Lemma 5. For any p, q > 0 we have that
?2
p
?
?2
p+q
?
q
3
(p+q) 2
exp(M )?1?M 2
x .
M2
.
Proof of Theorem 1. In the following, denote by n(x) := max(‖x‖, ‖x‖²). We will use Lemma 1 to upper bound the regret of DFEG. Hence, using the notation in Algorithm 1, set zt = −∂ℓt(⟨wt, xt⟩)xt, and ft(w) = αt‖w‖(log(βt‖w‖) − 1). Observe that, by the hypothesis on ℓt, we have ‖zt‖ ≤ L‖xt‖. We first consider two cases, based on the norm of θt.
Case 1: ‖θt‖ > αt + ‖zt‖.
With this assumption, and using the third property of Lemma 2, we have
max_{0≤τ≤1} ‖∇²f*t(θt + τ zt)‖ ≤ max_{0≤τ≤1} exp(‖θt + τ zt‖/αt) / (βt min(‖θt + τ zt‖, αt)) ≤ exp((‖θt‖ + ‖zt‖)/αt) / (αt βt).
We now use the second statement of Lemma 1. We have that (λt/2)‖zt‖² + f*t(θt) − f*t−1(θt) can be upper bounded by
(‖zt‖²/(2αtβt)) exp((‖θt‖ + ‖zt‖)/αt) + (αt/βt) exp(‖θt‖/αt) − (αt−1/βt−1) exp(‖θt‖/αt−1)
≤ (‖zt‖²/(2αtβt)) exp((‖θt‖ + ‖zt‖)/αt) + (αt/βt) exp(‖θt‖/αt) − (αt−1/βt−1) exp(‖θt‖/αt)
= exp(‖θt‖/αt) ( (‖zt‖²/(2aHt²)) exp(‖zt‖/αt) + a/Ht − a/Ht−1 ).   (1)
We will now prove that the term in the parenthesis of (1) is negative. It can be rewritten as
(‖zt‖²/(2aHt²)) exp(‖zt‖/αt) + a/Ht − a/Ht−1 = ( ‖zt‖² Ht−1 exp(‖zt‖/αt) − 2a² Ht−1 L² n(xt) − 2a² L⁴ (n(xt))² ) / (2a Ht² Ht−1),
2
?t ? a , so we now use Lemma 3 with p = 2a and
1
exp( a
)
because 1 +1 ? 2a2 ? exp( a1 ), ? 0.825 ? a ? 1.109, as it
a2
and from the expression of ?t we have that
M = 1/a. These are valid settings
can be verified numerically.
kz t k2
kz t k
a
a
exp
+
?
2
2aHt
?t
Ht
Ht?1
2
2
2
2
kz t k Ht?1 2a + a (exp( a1 ) ? 2a2 ) kz?t2k ? 2a2 Ht?1 L2 n(xt ) ? 2a2 L4 (n(xt ))2
t
?
2aHt2 Ht?1
2
2
tk
L2 kxt k2 Ht?1 2a2 + a2 (exp( a1 ) ? 2a2 ) L akx
? 2a2 Ht?1 L2 kxt k2 ? 2a2 L4 kxt k2
2H
t
?
2aHt2 Ht?1
L4 kxt k4 (exp( a1 ) ? 4a2 )
?
? 0,
(2)
2aHt2 Ht?1
6
where in last step we used the fact that exp( a1 ) ? 4a2 , ? a ? 0.882, as again it can be verified
numerically.
Case 2: k? t k ? ?t + kz t k.
We use the first statement of Lemma 1, setting w0t = wt if k?k =
6 0, and w0t = 0 otherwise. In this
0
way, from the second property of Lemma 2, we have that kwt k ? ?1t exp( k??ttk ). Note that any other
choice of w0t satisfying the the previous relation on the norm of w0t would have worked as well.
?t
k? t+1 k
?t?1
k? t k
?
?
ft (? t+1 ) ? ft?1 (? t ) =
exp
?
exp
?t
?t
?t?1
?t?1
k
exp kz
?t
Ht?1 ? Ht
k? t k
?t
kz t k
?t?1
k? t k
a Ht
? exp
exp
?
= a exp
. (3)
?t
?t
?t
?t?1
?t
Ht?1 Ht
Remembering that kz?ttk ? a1 , and using Lemma 4 with M = a1 , we have
kz t k
Lkxt k
?
? Ht?1 ? L2 n(xt ) ? Ht?1 exp ?
? Ht?1 ? L2 kxt k2
Ht?1 exp
a Ht
a Ht
1
1 L2 kxt k2
Lkxt k
2
+ a exp
?1?
? Ht?1 1 + ?
? Ht?1 ? L2 kxt k2
a
a
a2 Ht
a Ht
1
LHt?1 kxt k
1 L2 Ht?1 kxt k2
?
+ exp
=
?1?
? L2 kxt k2
a
a
Ht
a Ht
1
1
LHt?1 kxt k
LHt?1 kxt k
2
2
?
?
+ L kxt k exp
?2?
?
,
(4)
?
a
a
a Ht
a Ht
where in the last step we used the fact that exp( a1 ) ? 2 ? a1 ? 0, ? a ? 0.873, verified numerically.
Putting together (3) and (4), we have
k? t k Lkxt k
?
? hw0t , z t i
ft? (? t+1 ) ? ft?1
(? t ) ? hw0t , z t i ? exp
3
?t
2
H
t
k? t k Lkxt k
k? t k Lkxt k
k? t k Lkxt k
0
? exp
+ Lkwt kkxt k ? exp
+ exp
3
3
?t
?t
?t
?t
Ht2
Ht2
1
2 exp(1 + a )Lkxt k
k? t k Lkxt k
?
,
(5)
= 2 exp
3
3
?t
Ht2
Ht2
where in the second inequality we used the Cauchy-Schwarz inequality and the Lipschitzness of `t ,
in the third the bound on the norm of w0t , and in the last inequality the fact that k? t k ? ?t + kz t k
implies exp( k??ttk ) ? exp(1 + a1 ). Putting together (2) and (5) and summing over t, we have
T
X
T
T
X
2 exp(1 + a1 )Lkxt k
2 X exp(1 + a1 )L2 kxt k
?
ft? (? t+1 ) ? ft?1
(? t ) ? hw0t , z t i ?
?
P
3
L t=1 ( tj=1 L2 kxt k + ?) 32
Ht2
t=1
t=1
?
?
T
4 exp(1 + a1 ) X
4 exp(1 + a1 )
1
1
?q
??
?
q
?
?
,
Pt
Pt?1 2
L
L ?
L2 kx k + ?
L kx k + ?
t=1
j=1
t
j=1
t
where in the third inequality we used Lemma 5.
The stated bound can be obtained observing that ℓt(⟨wt, xt⟩) − ℓt(⟨u, xt⟩) ≤ ⟨u − wt, zt⟩, from the convexity of ℓt and the definition of zt.
5
Experiments
A full empirical evaluation of DFEG is beyond the scope of this paper. Here I just want to show the
empirical effect of some of its theoretical properties. In all the experiments I used the absolute loss,
[Figure 1 panels: "Synthetic dataset" (Regret vs. number of samples; curves DFEG, Kernel GD with eta = 0.05, 0.1, 0.2); "cadata dataset" and "cpusmall dataset" (Total loss vs. eta; curves DFEG, Kernel GD with various eta)]
Figure 1: Left: regret versus number of input vectors on synthetic dataset. Center and Right: total loss for
DFEG and Kernel GD on the cadata and cpusmall dataset respectively.
so L = 1, a is set to the minimal
value allowed by Theorem 1 and ? = 1. I denote by Kernel GD
?
the OMD with the regularizer ?t kwk2 .
First, I generated synthetic data as in the proof of Theorem 2, that is the input vectors are all the same
and the yt is equal to 1 for the t even and ?1 for the others. In this case we know that the optimal
predictor has norm equal to zero and we can exactly calculate the value of the regret. Figure 1(left)
I have plotted the regret as a function of the number of input vectors. As ?
predicted by the theory,
DFEG has a constant regret, while Kernel GD has a regret of the form O(? T ). Hence, it can have
a constant regret only when ? is set to zero, and this can be done only with prior knowledge of kuk,
that is impossible in practical applications.
For the second experiment, I analyzed the behavior of DFEG on two real word regression datasets,
cadata and cpusmall4 . I used the Gaussian kernel with variance equal to the average distance between training input vectors. I have plotted in Figure 1(central) the final cumulative loss of DFEG
and the ones of GD with varying values of ?. We see that, while the performance of Kernel GD can
be better of the one of DFEG, as predicted by the theory, its performance varies greatly in relation
to ?. On the other hand the performance of DFEG is close to the optimal one without the need to
tune any parameters. It is also worth noting the catastrophic result we can get from a wrong tuning
of ? in GD. Similar considerations hold for the cpusmall dataset in Figure 1(right).
6
Discussion
I have presented a new algorithm for online learning, the first one in the family of exponentiated
gradient to be dimension-free. Thanks
? to new analysis tools, I have proved that DFEG attains a
regret bound of O(U log(U T + 1)) T ), without any
? parameter to tune. I also proved a lower
bound that shows that the algorithm is optimal up to log T term for linear and Lipschitz losses.
The problem of deriving a regret bound that depends on the sequence of theq
gradients, rather than
PT
on the xt , remains open. Resolving this issue would result in the tighter O(
t=1 `t (hw t , xt i))
regret bounds in the case that the `t are smooth [18]. The difficulty in proving these kind of bounds
seem to lie in the fact that (2) is negative only because Ht ? Ht?1 is bigger than kz t k2 .
Acknowledgments
I am thankful to Jennifer Batt for her help and support during the writing of this paper, to Nicol`o
Cesa-Bianchi for the useful comments on an early version of this work, and to Tamir Hazan for his
writing style suggestions. I also thank the anonymous reviewers for their precise comments, which
helped me to improve the clarity of this manuscript.
4
http://www.csie.ntu.edu.tw/?cjlin/libsvmtools/datasets/
8
References
[1] J. Abernethy, A. Agarwal, P. L. Bartlett, and A. Rakhlin. A stochastic view of optimal regret
through minimax duality. In COLT, 2009.
[2] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. J. Comput. Syst. Sci., 64(1):48?75, 2002.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press,
2006.
[4] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and
stochastic optimization. Journal of Machine Learning Research, 12:2121?2159, 2011.
[5] E. Even-Dar, M. Kearns, Y. Mansour, and J. Wortman. Regret to the best vs. regret to the
average. In N. H. Bshouty and C. Gentile, editors, COLT, volume 4539 of Lecture Notes in
Computer Science, pages 233?247. Springer, 2007.
[6] Y. Freund and R. E. Schapire. Large margin classification using the Perceptron algorithm.
Machine Learning, pages 277?296, 1999.
[7] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265?299, 2003.
[8] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari. Regularization techniques for learning with
matrices. CoRR, abs/0910.0610, 2009.
[9] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1?63, January 1997.
[10] N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold
algorithm. Machine Learning, 2(4):285?318, 1988.
[11] Y. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87.
Springer, 2003.
[12] F. Orabona and K. Crammer. New adaptive algorithms for online classification. In J. Lafferty,
C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural
Information Processing Systems 23, pages 1840?1848. 2010.
[13] F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with
applications to classification and regression, 2013. arXiv:1304.2994.
[14] R. T. Rockafellar. Convex Analysis (Princeton Mathematical Series). Princeton University
Press, 1970.
[15] S. Shalev-Shwartz. Online learning: Theory, algorithms, and applications. Technical report,
The Hebrew University, 2007. PhD thesis.
[16] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends
in Machine Learning, 4(2), 2012.
[17] S. Shalev-Shwartz and Y. Singer. A primal-dual perspective of online learning algorithms.
Machine Learning Journal, 2007.
[18] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In J. Lafferty,
C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural
Information Processing Systems 23, pages 2199?2207. 2010.
[19] M. Streeter and B. McMahan. No-regret algorithms for unconstrained online convex optimization. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors,
Advances in Neural Information Processing Systems 25, pages 2411?2419. 2012.
[20] V. Vovk. On-line regression competitive with reproducing kernel hilbert spaces. In Jin-Yi
Cai, S.Barry Cooper, and Angsheng Li, editors, Theory and Applications of Models of Computation, volume 3959 of Lecture Notes in Computer Science, pages 452?463. Springer Berlin
Heidelberg, 2006.
[21] V. G. Vovk. Aggregating strategies. In COLT, pages 371?386, 1990.
9
| 4920 |@word briefly:1 version:2 achievable:1 norm:27 seems:1 open:1 hu:7 boundedness:1 contains:1 series:1 recovered:1 com:1 surprising:1 must:2 chicago:2 additive:1 designed:2 update:5 v:1 warmuth:1 provides:1 mathematical:1 direct:1 differential:1 prove:9 consists:1 introductory:1 introduce:5 indeed:3 behavior:1 nor:1 growing:1 equipped:1 kkxt:1 abound:1 moreover:3 bounded:2 notation:1 what:1 kind:2 lipschitzness:1 guarantee:6 pseudo:1 safely:1 exactly:1 classifier:1 k2:19 supv:1 wrong:1 local:9 aggregating:2 mistake:1 despite:1 lugosi:1 might:1 twice:4 range:4 practical:1 acknowledgment:1 regret:43 empirical:2 matching:1 word:2 regular:1 get:2 cannot:2 close:1 impossible:1 writing:2 www:1 equivalent:3 reviewer:1 yt:9 center:1 go:3 williams:2 omd:14 independently:1 convex:14 survey:1 starting:1 simplicity:1 m2:2 rule:1 deriving:1 his:1 proving:2 notion:1 coordinate:2 target:1 play:2 pt:6 hypothesis:4 agreement:1 logarithmically:1 trend:1 satisfying:1 predicts:1 observed:1 role:2 ft:34 csie:1 hv:3 calculate:1 culotta:2 trade:3 technological:1 mentioned:1 pd:1 convexity:3 ui:1 nesterov:1 mine:1 depend:2 solving:1 upon:1 f2:3 easily:2 various:2 regularizer:16 fast:1 describe:1 zemel:2 choosing:2 shalev:5 h0:1 abernethy:1 whose:1 supplementary:4 otherwise:1 final:1 online:30 hoc:1 sequence:10 kxt:22 differentiable:6 cai:1 product:2 remainder:1 description:1 intuitive:1 w01:1 extending:2 tk:2 thankful:1 depending:1 derive:1 help:1 bshouty:1 advocated:1 received:1 strong:7 predicted:3 come:1 implies:1 direction:1 correct:1 attribute:1 stochastic:2 libsvmtools:1 material:4 require:1 fix:1 generalization:1 f1:3 anonymous:1 ntu:1 tighter:3 comprehension:1 extension:2 hold:5 considered:1 exp:55 scope:1 major:1 entropic:1 a2:15 early:1 hwt:10 label:4 schwarz:1 tool:3 ttk:5 always:1 gaussian:1 aim:2 rather:4 varying:10 timevarying:1 corollary:2 derived:4 focus:1 vk:2 greatly:1 attains:1 am:1 dependent:2 typically:1 her:1 relation:3 theq:1 classification:6 flexible:1 dual:3 denoted:2 issue:1 colt:3 development:1 w0t:7 special:1 initialize:2 equal:5 having:1 represents:1 cancel:2 others:1 report:1 few:1 composed:1 kwt:1 t2k:1 ab:1 interest:1 evaluation:1 numer:1 analyzed:2 primal:2 tj:1 regularizers:4 closer:1 necessary:1 euclidean:3 taylor:2 littlestone:1 plotted:2 theoretical:1 minimal:1 lucid:1 instance:3 fenchel:3 eta:7 cpusmall:3 predictor:2 wortman:1 too:3 varies:1 my:10 synthetic:3 confident:2 thanks:3 gd:11 off:3 together:2 quickly:1 squared:4 again:1 unavoidable:1 management:1 central:1 choose:3 cesa:4 thesis:1 expert:1 strive:1 leading:2 style:1 li:1 syst:1 rockafellar:1 satisfy:1 ranking:1 vi:1 ad:1 depends:4 multiplicative:2 helped:1 view:2 decisive:1 closed:2 observing:1 hazan:2 analyze:1 linked:1 kwk:6 start:1 competitive:1 minimize:2 square:2 variance:1 none:1 confirmed:1 worth:1 suffers:2 definition:3 competitor:13 against:6 proof:14 proved:3 dataset:6 popular:1 ask:1 kukt:3 knowledge:2 hilbert:2 carefully:1 auer:1 manuscript:1 follow:2 evaluated:1 done:1 strongly:3 just:1 implicit:1 reply:1 hand:4 working:1 believe:1 grows:1 usa:1 effect:1 normalized:1 true:1 regularization:5 hence:8 eg:6 round:8 during:1 self:3 game:1 unnormalized:1 generalized:1 tt:3 duchi:1 consideration:1 novel:1 recently:1 common:1 volume:3 extend:3 theirs:1 kwk2:3 numerically:3 refer:1 cambridge:1 egu:2 smoothness:9 tuning:2 unconstrained:2 shawe:2 portfolio:1 access:3 f0:1 similarity:3 recent:1 perspective:1 winnow:1 irrelevant:1 discard:1 scenario:1 route:2 claimed:1 meta:2 inequality:5 yi:1 gentile:3 
remembering:1 barry:1 resolving:1 full:1 smooth:2 technical:3 match:1 adapt:1 faster:1 space2:1 bigger:1 a1:15 parenthesis:1 prediction:5 scalable:1 regression:5 variant:2 instantiating:1 basic:3 arxiv:1 kernel:16 tailored:1 agarwal:1 receive:1 want:2 else:1 appropriately:1 comment:2 hz:1 contrary:1 lafferty:2 spirit:1 seem:3 sridharan:1 call:1 unitary:1 noting:1 revealed:2 easy:1 enough:1 inner:3 idea:2 knowing:3 t0:2 expression:1 bartlett:2 suffer:1 hessian:5 dar:1 useful:1 tewari:2 clear:1 tune:5 http:1 schapire:1 notice:3 correctly:1 key:1 putting:2 terminology:1 threshold:1 achieving:1 k4:1 clarity:1 neither:1 kuk:14 verified:3 ht:43 asymptotically:1 subgradient:2 sum:3 run:1 everywhere:1 extends:1 family:3 acceptable:1 bound:64 pay:1 quadratic:1 nontrivial:1 infinity:1 worked:1 aht:1 optimality:1 min:2 subgradients:1 conjugate:3 wi:3 lp:1 tw:1 kakade:1 ln:2 remains:2 jennifer:1 turn:1 cjlin:1 singer:2 know:2 end:2 rewritten:1 observe:2 generic:2 robustness:1 weinberger:1 running:1 include:1 hinge:1 giving:1 question:1 already:1 strategy:1 usual:2 gradient:20 distance:1 thank:1 sci:1 berlin:1 parametrized:1 kz2:1 me:1 cauchy:1 trivial:1 assuming:2 code:1 hebrew:1 statement:2 negative:5 rise:1 stated:2 implementation:1 design:2 zt:1 unknown:1 bianchi:4 upper:11 francesco:2 datasets:2 discarded:1 descent:6 jin:1 january:1 defining:1 precise:1 mansour:1 reproducing:1 arbitrary:1 able:1 beyond:1 below:1 usually:1 built:1 including:1 max:7 natural:1 difficulty:1 regularized:1 kivinen:1 minimax:1 improve:2 log1:2 epoch:2 prior:1 l2:21 nicol:1 relative:1 freund:1 loss:26 expect:1 lecture:3 sublinear:1 interesting:2 suggestion:1 proven:1 versus:3 srebro:1 foundation:1 editor:5 share:2 course:1 last:3 copy:1 free:9 guide:1 exponentiated:13 weaker:1 perceptron:5 institute:1 wide:1 burges:1 absolute:2 dimension:12 valid:1 cumulative:4 tamir:1 kz:26 adaptive:3 avoided:2 far:1 cope:1 cadata:3 implicitly:1 global:3 instantiation:2 summing:1 leader:1 shwartz:5 xi:2 ht2:7 streeter:1 nature:1 ku:2 lht:3 heidelberg:1 bottou:1 domain:5 main:3 noise:1 allowed:1 akx:1 cooper:1 pereira:1 exponential:1 comput:1 lie:1 kxk2:1 mcmahan:1 toyota:1 third:3 young:1 hw:1 theorem:14 kuk2:2 specific:3 xt:27 explored:1 rakhlin:1 exists:2 corr:1 mirror:4 phd:1 kx:2 margin:1 easier:1 logarithmic:3 simply:1 kxk:1 dfeg:25 springer:3 exposition:1 orabona:4 lipschitz:5 change:1 infinite:6 typical:2 vovk:3 wt:8 lemma:20 max0:1 total:4 called:2 kearns:1 duality:2 catastrophic:1 meaningful:1 l4:4 internal:2 support:1 crammer:2 princeton:2 |
4,333 | 4,921 | Generalizing Analytic Shrinkage for Arbitrary
Covariance Structures
Daniel Bartz
Department of Computer Science
TU Berlin, Berlin, Germany
[email protected]
?
Klaus-Robert Muller
TU Berlin, Berlin, Germany
Korea University, Korea, Seoul
[email protected]
Abstract
Analytic shrinkage is a statistical technique that offers a fast alternative to crossvalidation for the regularization of covariance matrices and has appealing consistency properties. We show that the proof of consistency requires bounds on
the growth rates of eigenvalues and their dispersion, which are often violated in
data. We prove consistency under assumptions which do not restrict the covariance
structure and therefore better match real world data. In addition, we propose an
extension of analytic shrinkage ?orthogonal complement shrinkage? which adapts
to the covariance structure. Finally we demonstrate the superior performance of
our novel approach on data from the domains of finance, spoken letter and optical
character recognition, and neuroscience.
1
Introduction
The estimation of covariance matrices is the basis of many machine learning algorithms and estimation procedures in statistics. The standard estimator is the sample covariance matrix: its entries are
unbiased and consistent [1]. A well-known shortcoming of the sample covariance is the systematic
error in the spectrum. In particular for high dimensional data, where dimensionality p and number
of observations n are often of the same order, large eigenvalues are over- und small eigenvalues
underestimated. A form of regularization which can alleviate this bias is shrinkage [2]: the convex
combination of the sample covariance matrix S and a multiple of the identity T = p?1 trace(S)I,
Csh = (1 ? ?)S + ?T,
(1)
has potentially lower mean squared error and lower bias in the spectrum [3]. The standard procedure
for chosing an optimal regularization for shrinkage is cross-validation [4], which is known to be
time consuming. For online settings CV can become unfeasible and a faster model selection method
is required. Recently, analytic shrinkage [3] which provides a consistent analytic formula for the
above regularization parameter ? has become increasingly popular. It minimizes the expected mean
squared error of the convex combination with a computational cost of O(p2 ), which is negligible
when used for algorithms like Linear Discriminant Analysis (LDA) which are O(p3 ).
The consistency of analytic shrinkage relies on assumptions which are rarely tested in practice [5].
This paper will therefore aim to render the analytic shrinkage framework more practical and usable
for real world data. We contribute in three aspects: first, we derive simple tests for the applicability
of the analytic shrinkage framework and observe that for many data sets of practical relevance the
assumptions which underly consistency are not fullfilled. Second, we design assumptions which
better fit the statistical properties observed in real world data which typically has a low dimensional structure. Under these new assumptions, we prove consistency of analytic shrinkage. We
show a counter-intuitive result: for typical covariance structures, no shrinkage ?and therefore no
regularization? takes place in the limit of high dimensionality and number of observations. In practice, this leads to weak shrinkage and degrading performance. Therefore, third, we propose an extension of the shrinkage framework: automatic orthogonal complement shrinkage (aoc-shrinkage)
1
takes the covariance structure into account and outperforms standard shrinkage on real world data at
a moderate increase in computation time. Note that proofs of all theorems in this paper can be found
in the supplemental material.
2
Overview of analytic shrinkage
To derive analytic shrinkage, the expected mean squared error of the shrinkage covariance matrix
eq. (1) as an estimator of the true covariance matrix C is minimized:
2
?
(2)
? = arg min R(?) := arg min E
C ? (1 ? ?)S ? ?T
?
?
(
)
n
o
h
X
2 i
= arg min
2? Cov Sij , Tij ? Var Sij
+ ?2 E Sij ? Tij
+ Var Sij
(3)
?
i,j
n
o
Var
S
?
Cov
S
,
T
ij
ij
ij
i,j
h
.
2 i
P
E
S
?
T
ij
ij
i,j
P
=
? is obtained by replacing expectations with sample estimates:
The analytic shrinkage estimator ?
2
X
1
1X
d Sij =
Var
xis xjs ?
xit xjt
(n ? 1)n s
n t
(
)
X X
X
X
1
1
2
2
2
2
d Sii , Tii =
xis xks ?
x
x 0
Cov
(n ? 1)np
n t it 0 it
s
t
k
b (Sij ? Tij )2 = (Sij ? Tij )2
E
? are based on analysis of a sequence of statistical models
Theoretical results on the estimator ?
indexed by n. Xn denotes a pn ? n matrix of n iid observations of pn variables with mean zero
and covariance matrix ?n . Yn = ?Tn Xn denotes the same observations in their eigenbasis, having
n
diagonal covariance ?n = ?Tn ?n ?n . Lower case letters xnit and yit
denote the entries of Xn and
? is its consistency in the large n, p
Yn , respectively1 . The main theoretical result on the estimator ?
limit [3]. A decisive role is played by an assumption on the eighth moments2 in the eigenbasis:
Assumption 2 (A2, Ledoit/Wolf 2004 [3]). There exists a constant K2 independent of n such that
p?1
n
pn
X
n 8
E[(yi1
) ] ? K2 .
i=1
3
Implicit assumptions on the covariance structure
From the assumption on the eighth moments in the eigenbasis, we derive requirements on the eigenvalues which facilitate an empirical check:
Theorem 1 (largest eigenvalue growth rate). Let A2 hold. Then, there exists a limit on the growth
rate of the largest eigenvalue
?1n = max Var(yin ) = O p1/4
.
n
i
Theorem 2 (dispersion growth rate). Let A2 hold. Then, there exists a limit on the growth rate of
the normalized eigenvalue dispersion
X
X
dn = p?1
(?i ? p?1
?j )2 = O (1) .
n
n
i
j
1
We shall often drop the sequence index n and the observation index t to improve readability of formulas.
eighth moments arise because Var(Sij ), the variance of the sample covariance, is of fourth order and has
to converge. Nevertheless, even for for non-Gaussian data convergence is fast.
2
2
model A
model B
dispersion and largest EV
4
40
model B
20
100
10
50
sample dispersion
max(EV)
3.5
30
3
20
2.5
10
2
100
200
300
400
0
500
0
100
200
300
400
max(EV)
covariance matrices
normalized sample dispersion
model A
0
500
dimensionality
Figure 1: Covariance matrices and dependency of the largest eigenvalue/dispersion on the dimensionality. Average over 100 repetitions.
normalized sample dispersion
600 10
sample dispersion
max(EV)
100
BCI EEG data
40
200 100
200
20
20
100 50
100
400
5
50
0
0
ISOLET spoken letters
40
200
0
500
1000
#assets
0
0
100
200
#pixels
0
0
0
dimensionality
200
400
#features
0
600
0
0
200
#features
max(EV)
USPS hand?written digits
US stock market
150
0
400
Figure 2: Dependency of the largest eigenvalue/dispersion on the dimensionality. Average over 100
random subsets.
The theorems restrict the covariance structure of the sequence of models when the dimensionality
increases. To illustrate this, we design two sequences of models A and B indexed by their dimensionality p, in which dimensions xpi are correlated with a signal sp :
xpi
=
(0.5 + bpi ) ? ?pi + ?cpi sp , with probability PsA/B (i),
(0.5 + bpi ) ? ?pi ,
else.
(4)
where bpi and cpi are uniform random from [0, 1], sp and pi are standard normal, ? = 1, PsB (i) = 0.2
and PsA (i) = (i/10 + 1)?7/8 (power law decay). To avoid systematic errors, we hold the ratio of
observations to dimensions fixed: np /p = 2.
To the left in Figure 1, covariance matrices are shown: For model A, the matrix is dense in the
upper left corner, the more dimensions we add the more sparse the matrix gets. For model B,
correlations are spread out evenly. To the right, normalized sample dispersion and largest eigenvalue
are shown. For model A, we see the behaviour from the theorems: the dispersion is bounded, the
largest eigenvalue grows with the fourth root. For model B, there is a linear dependency of both
dispersion and largest eigenvalue: A2 is violated.
For real world data, we measure the dependency of the largest eigenvalue/dispersion on the dimensionality by averaging over random subsets. Figure 2 shows the results for four data sets3 : (1) New
York Stock Exchange, (2) USPS hand-written digits, (3) ISOLET spoken letters and (4) a Brain
Computer Interface EEG data set. The largest eigenvalues and the normalized dispersions (see Figure 2) closely resemble model B; a linear dependence on the dimensionality which violates A2 is
visible.
3
for details on the data sets, see section 5.
3
4
Analytic shrinkage for arbitrary covariance structures
We replace A2 by a weaker assumption on the moments in the basis of the observations X which
does not impose any constraints on the covariance structure4 :
Assumption 20 (A20 ). There exists a constant K2 independent of p such that
?1
p
p
X
E[(xpi1 )8 ] ? K2 .
i=1
Standard assumptions For the proof of consistency, the relationship between dimensionality and
number of observations has to be defined and a weak restriction on the correlation of the products
of uncorrelated variables is necessary. We use slightly modified versions of the original assumptions [3].
Assumption 10 (A10 , Kolmogorov asymptotics). There exists a constant K1 , 0 ? K1 ? ? independent of p such that
lim p/np = K1 .
p??
Assumption 30 (A30 ).
P
lim
i,j,kl,l?Qp
p p
p p 2
Cov[yi1
yj1 , yk1
yl1 ]
|Qp |
p??
=0
where Qp is the set of all quadruples consisting of distinct integers between 1 and p.
Additional Assumptions A10 to A30 subsume a wide range of dispersion and eigenvalue configurations. To investigate the role which this plays, we categorize sequences by adding an additional
parameter k. It will prove essential for the limit behavior of optimal shrinkage and the consistency
of analytic shrinkage:
Assumption 4 (A4, growth rate of the normalized dispersion). Let ?i denote the eigenvalues of C.
Then, the limit behaviour of the normalized dispersion is parameterized by k:
X
X
p?1
(?i ? p?1
?j )2 = ? max(1, p2k?1 ) ,
i
j
where ? is the Landau Theta.
In sequences of models with k ? 0.5 the normalized dispersion is bounded from above and below, as
in model A in the last section. For k > 0.5 the normalized dispersion grows with the dimensionality,
for k = 1 it is linear in p, as in model B.
We make two technical assumptions to rule out degenerate cases. First, we assume that, on average,
additional dimensions make a positive contribution to the mean variance:
Assumption 5 (A5). There exists a constant K3 such that
p?1
p
X
E[(xpi1 )2 ] ? K3 .
i=1
Second, we assume that limits on the relation between second, fourth and eighth moments exist:
Assumption 6 (A6, moment relation). ??4 , ?8 , ?4 and ?8 :
4
E[yi8 ]
?
(1 + ?8 )E2 [yi4 ]
E[yi4 ]
?
(1 + ?4 )E2 [yi2 ]
E[yi8 ]
?
(1 + ?8 )E2 [yi4 ]
E[yi4 ]
?
(1 + ?4 )E2 [yi2 ]
For convenience, we index the sequence of statistical models by p instead of n.
4
Figure 3: Illustration of orthogonal complement shrinkage.
Theoretical results on limit behaviour and consistency We are able to derive a novel theorem
which shows that under these wider assumptions, shrinkage remains consistent:
Theorem 3 (Consistency of Shrinkage). Let A10 , A20 , A30 , A4, A5, A6 hold and
2
?
?
?
m = E (? ? ?)/?
? Then, independently of k,
denote the expected squared relative error of the estimate ?.
lim m = 0.
p??
An unexpected caveat accompanying this result is the limit behaviour of the optimal shrinkage
strength ?? :
Theorem 4 (Limit behaviour). Let A10 , A20 , A30 , A4, A5, A6 hold. Then, there exist 0 < bl <
bu < 1
k ? 0.5
k > 0.5
?n : bl ? ?? ? bu
lim ?? = 0
?
?
p??
The theorem shows that there is a fundamental problem with analytic shrinkage: if k is larger
than 0.5 (all data sets in the last section had k = 1) there is no shrinkage in the limit.
5
Automatic orthogonal complement shrinkage
Orthogonal complement shrinkage To obtain a finite shrinkage strength, we propose an extension of shrinkage we call oc-shrinkage: it leaves the first eigendirection untouched and performs
shrinkage on the orthogonal complement oc of that direction. Figure 3 illustrates this approach. It
shows a three dimensional true covariance matrix with a high dispersion that makes it highly ellipsoidal. The result is a high level of discrepancy between the spherical shrinkage target and the true
covariance. The best convex combination of target and sample covariance will put extremely low
weight on the target. The situation is different in the orthogonal complement of the first eigendirection of the sample covariance matrix: there, the discrepancy between sample covariance and target
is strongly reduced.
To simplify the theoretical analysis, let us consider the case where there is only a single growing
eigenvalue while the remainder stays bounded:
5
Assumption 40 (A40 single large eigenvalue). Let us define
zi = yi ,
2 ? i ? p,
z1 = p?k/2 y1 .
There exist constants Fl and Fu such that Fl ? E[zi8 ] ? Fu
A recent result from Random Matrix Theory [6] allows us to prove that the projection on the empir? oc
ical orthogonal complement oc
b does not affect the consistency of the estimator ?
b:
0
0
0
0
Theorem 5 (consistency of oc-shrinkage). Let A1 , A2 , A3 , A4 , A5, A6 hold. In addition, assume
that 16th moments5 of the yi exist and are bounded. Then, independently of k,
2
?
lim ?oc
= 0,
b ? arg min Qoc
b (?)
p??
?
where Q denotes the mean squared error (MSE) of the convex combination (cmp. eq. (2)).
Automatic model selection Orthogonal complement shrinkage only yields an advantage if the first
eigenvalue is large enough. Starting from eq. (2), we can consistently estimate the error of standard
shrinkage and orthogonal complement shrinkage and only use oc-shrinkage when the difference
b R,oc
?
b is positive. In the supplemental material, we derive a formula of a conservative estimate:
?2 ?
b R,cons.,oc
b b ? m? ?
?
?b
? mE ?
? ?.
b = ?R,oc
?R,oc
c
oc
b E
Usage of m? = 0.45 corresponds to 75% probability of improvement under gaussianity and yields
good results in practice. The second term is relevant in small samples, setting mE = 0.1 is sufficient.
A dataset may have multiple large eigenvalues. It is straightforward to iterate the procedure and thus
automatically select the number of retained eigendirections r?. We call this automatic orthogonal
complement shrinkage. An algorithm listing can be found in the supplemental.
The computational cost of aoc-shrinkage is larger than that of standard shrinkage as it additionally
requires an eigendecomposition O(p3 ) and some matrix multiplications O(?
rp2 ). In the applications
considered here, this additional cost is negligible: r? p and the eigendecomposition can replace
matrix inversions for LDA, QDA or portfolio optimization.
1
0.9
PRIAL
0.8
0.7
0.6
Shrinkage
oc(1)?Shrinkage
oc(2)?Shrinkage
oc(3)?Shrinkage
oc(4)?Shrinkage
aoc?Shrinkage
0.5
0.4
1
10
2
10
dimensionality p
Figure 4: Automatic selection of the number of eigendirections. Average over 100 runs.
6
Empirical validation
Simulations To test the method, we extend model B (eq. (4), section 3) to three signals, Psi = (0.1,
0.25, 0.5). Figure 4 reports the percentage improvement in average loss over the sample covariance
matrix,
EkS ? Ck ? EkCsh/oc?sh/aoc?sh ? Ck
PRIAL Csh/oc?sh/aoc?sh =
,
EkS ? Ck
5
The existence of 16th moments is needed because we bound the estimation error in each direction by the
maximum over all directions, an extremely conservative approximation.
6
Table 1: Portfolio risk. Mean absolute deviations?103 (mean squared deviations?106 ) of the resulting
portfolios for the different covariance estimators and markets. ? := aoc-shrinkage significantly
better than this model at the 5% level, tested by a randomization test.
US
EU
HK
sample covariance
8.56? (156.1? ) 5.93? (78.9? ) 6.57? (81.2? )
standard shrinkage
6.27? (86.4? )
4.43? (46.2? ) 6.32? (76.2? )
?
?
0.09
0.12
0.10
shrinkage to a factor model 5.56? (69.6? )
4.00? (39.1? ) 6.17? (72.9? )
?
?
0.41
0.44
0.42
aoc-shrinkage
5.41 (67.0)
3.83 (36.3)
6.11 (71.8)
?
?
0.75
0.79
0.75
average r?
1.64
1.17
1.41
Table 2: Accuracies for classification tasks on ISOLET and USPS data. ? := significantly better
than all compared methods at the 5% level, tested by a randomization test.
ISOLET
USPS
ntrain
500
2000
5000
500
2000
5000
LDA
75.77%
92.29%
94.1%
72.31%
87.45% 89.56%
LDA (shrinkage) 88.92%
93.25%
94.3%
83.77%
88.37% 89.77%
?
?
?
?
LDA (aoc)
89.69%
93.42%
94.33%
83.95%
88.37% 89.77%
QDA
2.783%
4.882%
14.09%
10.11%
49.45% 72.43%
QDA (shrinkage) 58.57%
75.4%
79.25%
82.2%
88.85% 89.67%
?
QDA (aoc)
59.51%
80.84%
87.35%
83.31%
89.4%
90.07%
of standard shrinkage, oc-shrinkage for one to four eigendirections and aoc-shrinkage.
? and therefore the PRIAL tend to zero in
Standard shrinkage behaves as predicted by Theorem 4: ?
the large n, p limit. The same holds for orders of oc-shrinkage ?oc(1) and oc(2)? lower than the
number of signals, but performance degrades more slowly. For small dimensionalities eigenvalues
are small and therefore there is no advantage for oc-shrinkage. On the contrary, the higher the order
of oc-shrinkage, the larger the error by projecting out spurious large eigenvalues which should have
been subject to regularization. The automatic order selection aoc-shrinkage leads to close to optimal
PRIAL for all dimensionalities.
Real world data I: portfolio optimization Covariance estimates are needed for the minimization
of portfolio risk [7]. Table 1 shows portfolio risk for approximately eight years of daily return data
from 1200 US, 600 European and 100 Hong Kong stocks, aggregated from Reuters tick data [8].
Estimation of covariance matrices is based on short time windows (150 days) because of the data?s
nonstationarity. Despite the unfavorable ratio of observations to dimensionality, standard shrinkage
? the stocks are highly correlated and the spherical target is highly inapprohas very low values of ?:
priate. Shrinkage to a financial factor model incorporating the market factor [9] provides a better
target; it leads to stronger shrinkage and better portfolios. Our proposed aoc-shrinkage yields even
stronger shrinkage and significantly outperforms all compared methods.
Table 3: Accuracies for classification tasks on BCI data. Artificially injected noise in one electrode.
? :=
significantly better than all compared methods at the 5% level, tested by a randomization test.
?noise
0
10
30
100
300
1000
LDA
92.28%
92.28%
92.28%
92.28%
92.28%
92.28%
LDA (shrinkage) 92.39%
92.94%
92.18%
88.04%
82.15%
73.79%
?
?
?
?
?
?
LDA (aoc)
93.27%
93.27%
93.24%
92.88%
93.16%
93.19%
average r?
2.0836
3.0945
3.0891
3.0891
3.0891
3.09
7
0.0932
0.0631
0.2532
0.0466
0.0316
0.1266
0
0
0
?0.0466
?0.0316
?0.1266
?0.0932
?0.0631
?0.2532
Figure 5: High variance components responsible for failure of shrinkage in BCI. ?noise = 10.
Subject 1.
Real world data II: USPS and ISOLET We applied Linear and Quadratic Discriminant Analysis
(LDA and QDA) to hand-written digit recognition (USPS, 1100 observations with 256 pixels for
each of the 10 digits [10]) and spoken letter recognition (ISOLET, 617 features, 7797 recordings of
26 spoken letters [11], obtained from the UCI ML Repository [12]) to assess the quality of standard
and aoc-shrinkage covariance estimates.
Table 2 shows that aoc-shrinkage outperforms standard shrinkage for QDA and LDA on both data
sets for different training set sizes. Only for LDA and large sample sizes on the relatively low
dimensional USPS data, there is no difference between standard and aoc-shrinkage: the automatic
procedure decides that shrinkage on the whole space is optimal.
Real world data III: Brain-Computer-Interface The BCI data was recorded in a study in which
11 subjects had to distinguish between noisy and noise-free phonemes [13, 14]. We applied LDA
on 427 standardized features calculated from event related potentials in 61 electrodes to classify two
conditions: correctly identified noise-free and correctly identified noisy phonemes (ntrain = 1000).
For Table 3, we simulated additive noise in a random electrode (100 repetitions). With and without
noise, our proposed aoc-shrinkage outperforms standard shrinkage LDA. Without noise, r? ? 2 high
variance directions ?probably corresponding to ocular and facial muscle artefacts, depicted to the
left in Figure 5? are left untouched by aoc-shrinkage. With injected noise, the number of directions
increases to r? ? 3, as the procedure detects the additional high variance component ?to the right
in Figure 5? and adapts the shrinkage procedure such that performance remains unaffected. For
standard shrinkage, noise affects the analytic regularization and performance degrades as a result.
7
Discussion
Analytic shrinkage is a fast and accurate alternative to cross-validation which yields comparable
performance, e.g. in prediction tasks and portfolio optimization. This paper has contributed by clarifying the (limited) applicability of the analytic shrinkage formula. In particular we could show that
its assumptions are often violated in practice since real world data has complex structured dependencies. We therefore introduced a set of more general assumptions to shrinkage theory, chosen
such that the appealing consistency properties of analytic shrinkage are preserved. We have shown
that for typcial structure in real world data, strong eigendirections adversely affect shrinkage by
driving the shrinkage strength to zero. Therefore, finally, we have proposed an algorithm which
automatically restricts shrinkage to the orthogonal complement of the strongest eigendirections if
appropriate. This leads to improved robustness and significant performance enhancement in simulations and on real world data from the domains of finance, spoken letter and optical character
recognition, and neuroscience.
Acknowledgments
This work was supported in part by the World Class University Program through the National Research Foundation of Korea funded by the Ministry of Education, Science, and Technology, under
Grant R31-10008. We thank Gilles Blanchard, Duncan Blythe, Thorsten Dickhaus, Irene Winkler
and Anne Porbadnik for valuable comments and discussions.
8
References
[1] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer,
2008.
[2] Charles Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution.
In Proc. 3rd Berkeley Sympos. Math. Statist. Probability, volume 1, pages 197?206, 1956.
[3] Olivier Ledoit and Michael Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365?411, 2004.
[4] Jerome. H. Friedman. Regularized discriminant analysis. Journal of the American Statistical Association,
84(405):165?175, 1989.
[5] Juliane Sch?afer and Korbinian Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular
Biology, 4(1):1175?1189, 2005.
[6] Boaz Nadler. Finite sample approximation results for principal component analysis: A matrix perturbation
approach. The Annals of Statistics, 36(6):2791?2817, 2008.
[7] Harry Markowitz. Portfolio selection. Journal of Finance, VII(1):77?91, March 1952.
[8] Daniel Bartz, Kerr Hatrick, Christian W. Hesse, Klaus-Robert M?uller, and Steven Lemm. Directional
Variance Adjustment: Bias reduction in covariance matrices based on factor analysis with an application
to portfolio optimization. PLoS ONE, 8(7):e67503, 07 2013.
[9] Olivier Ledoit and Michael Wolf. Improved estimation of the covariance matrix of stock returns with an
application to portfolio selection. Journal of Empirical Finance, 10:603?621, 2003.
[10] Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 16(5):550?554, May 1994.
[11] Mark A Fanty and Ronald Cole. Spoken letter recognition. In Advances in Neural Information Processing
Systems, volume 3, pages 220?226, 1990.
[12] Kevin Bache and Moshe Lichman. UCI machine learning repository. University of California, Irvine,
School of Information and Computer Sciences, 2013.
[13] Anne Kerstin Porbadnigk, Jan-Niklas Antons, Benjamin Blankertz, Matthias S Treder, Robert Schleicher,
Sebastian M?oller, and Gabriel Curio. Using ERPs for assessing the (sub)conscious perception of noise.
In 32nd Annual Intl Conf. of the IEEE Engineering in Medicine and Biology Society, pages 2690?2693,
2010.
[14] Anne Kerstin Porbadnigk, Matthias S Treder, Benjamin Blankertz, Jan-Niklas Antons, Robert Schleicher,
Sebastian M?oller, Gabriel Curio, and Klaus-Robert M?uller. Single-trial analysis of the neural correlates
of speech quality perception. Journal of neural engineering, 10(5):056003, 2013.
9
| 4921 |@word kong:1 repository:2 version:1 inversion:1 trial:1 stronger:2 nd:1 simulation:2 covariance:37 prial:4 reduction:1 moment:6 configuration:1 lichman:1 daniel:3 outperforms:4 anne:3 written:3 ronald:1 visible:1 additive:1 underly:1 analytic:19 christian:1 drop:1 intelligence:1 leaf:1 ntrain:2 yi1:2 short:1 caveat:1 provides:2 math:1 contribute:1 readability:1 dn:1 sii:1 become:2 prove:4 market:3 expected:3 behavior:1 p1:1 growing:1 brain:2 detects:1 spherical:2 landau:1 automatically:2 window:1 bounded:4 minimizes:1 degrading:1 supplemental:3 spoken:7 berkeley:1 growth:6 finance:4 k2:4 grant:1 yn:2 eigendirection:2 positive:2 negligible:2 engineering:2 limit:12 despite:1 quadruple:1 approximately:1 juliane:1 erps:1 limited:1 range:1 practical:2 responsible:1 acknowledgment:1 practice:4 hesse:1 digit:4 procedure:6 jan:2 asymptotics:1 empirical:3 significantly:4 projection:1 get:1 unfeasible:1 convenience:1 selection:6 close:1 put:1 risk:3 restriction:1 straightforward:1 starting:1 independently:2 convex:4 estimator:9 isolet:6 rule:1 financial:1 annals:1 target:6 play:1 olivier:2 element:1 recognition:6 bache:1 yk1:1 database:1 observed:1 role:2 steven:1 irene:1 eu:1 counter:1 plo:1 valuable:1 benjamin:2 und:1 basis:2 usps:7 stock:5 kolmogorov:1 distinct:1 fast:3 shortcoming:1 klaus:4 sympos:1 kevin:1 larger:3 bci:4 statistic:2 cov:4 winkler:1 ledoit:3 noisy:2 online:1 sequence:7 eigenvalue:22 advantage:2 matthias:2 propose:3 product:1 fanty:1 remainder:1 tu:4 relevant:1 uci:2 degenerate:1 adapts:2 intuitive:1 crossvalidation:1 eigenbasis:3 convergence:1 electrode:3 requirement:1 enhancement:1 inadmissibility:1 assessing:1 intl:1 wider:1 derive:5 illustrate:1 ij:5 school:1 eq:4 strong:1 p2:1 predicted:1 resemble:1 direction:5 artefact:1 closely:1 hull:1 material:2 violates:1 education:1 exchange:1 behaviour:5 alleviate:1 randomization:3 extension:3 hold:7 accompanying:1 considered:1 normal:2 k3:2 nadler:1 driving:1 a2:7 estimation:6 proc:1 r31:1 cole:1 largest:10 repetition:2 minimization:1 uller:2 gaussian:1 aim:1 modified:1 ck:3 pn:3 avoid:1 shrinkage:90 cmp:1 eks:2 xit:1 improvement:2 consistently:1 check:1 hk:1 mueller:1 typically:1 spurious:1 relation:2 ical:1 germany:2 pixel:2 arg:4 classification:2 having:1 biology:2 discrepancy:2 minimized:1 np:3 report:1 simplify:1 markowitz:1 national:1 consisting:1 friedman:2 a5:4 investigate:1 highly:3 sh:4 strimmer:1 implication:1 accurate:1 fu:2 necessary:1 daily:1 korea:3 facial:1 orthogonal:12 indexed:2 qda:6 theoretical:4 a20:3 classify:1 a6:4 cost:3 applicability:2 deviation:2 entry:2 subset:2 uniform:1 dependency:5 xpi1:2 fundamental:1 xpi:2 stay:1 bu:2 systematic:2 michael:2 p2k:1 squared:6 recorded:1 csh:2 slowly:1 bpi:3 corner:1 adversely:1 american:1 usable:1 conf:1 return:2 account:1 potential:1 tii:1 de:2 harry:1 gaussianity:1 blanchard:1 decisive:1 root:1 contribution:1 aoc:18 ass:1 accuracy:2 variance:6 phoneme:2 listing:1 yield:4 directional:1 weak:2 handwritten:1 iid:1 asset:1 unaffected:1 strongest:1 nonstationarity:1 trevor:1 sebastian:2 a10:4 failure:1 ocular:1 e2:4 proof:3 psi:1 con:1 irvine:1 dataset:1 popular:1 lim:5 dimensionality:16 higher:1 day:1 improved:2 strongly:1 implicit:1 correlation:2 jerome:2 hand:3 replacing:1 empir:1 quality:2 lda:13 grows:2 usage:1 facilitate:1 normalized:9 true:3 unbiased:1 regularization:7 psa:2 xks:1 oc:24 hong:1 demonstrate:1 tn:2 performs:1 interface:2 novel:2 recently:1 respectively1:1 a30:4 superior:1 charles:1 behaves:1 functional:1 qp:3 overview:1 volume:2 untouched:2 
extend:1 association:1 significant:1 cv:1 automatic:7 rd:1 consistency:14 had:2 portfolio:11 funded:1 afer:1 add:1 multivariate:2 recent:1 moderate:1 yi:2 muller:1 muscle:1 ministry:1 additional:5 impose:1 converge:1 aggregated:1 signal:3 ii:1 multiple:2 technical:1 match:1 faster:1 offer:1 cross:2 molecular:1 a1:1 prediction:1 xjt:1 expectation:1 preserved:1 addition:2 underestimated:1 else:1 sch:1 probably:1 comment:1 subject:3 tend:1 recording:1 contrary:1 sets3:1 integer:1 call:2 iii:1 enough:1 blythe:1 iterate:1 affect:3 fit:1 zi:1 hastie:1 restrict:2 identified:2 render:1 speech:1 york:1 gabriel:2 tij:4 stein:1 ellipsoidal:1 conscious:1 statist:1 reduced:1 exist:4 percentage:1 psb:1 restricts:1 neuroscience:2 correctly:2 tibshirani:1 shall:1 yi8:2 four:2 nevertheless:1 yit:1 year:1 run:1 letter:8 fourth:3 parameterized:1 injected:2 eigendirections:5 place:1 p3:2 duncan:1 comparable:1 bound:2 fl:2 played:1 distinguish:1 treder:2 quadratic:1 annual:1 strength:3 constraint:1 kerstin:2 rp2:1 lemm:1 aspect:1 min:4 extremely:2 optical:2 relatively:1 department:1 structured:1 combination:4 march:1 slightly:1 increasingly:1 character:2 appealing:2 oller:2 projecting:1 sij:8 thorsten:1 remains:2 kerr:1 needed:2 yj1:1 eight:1 observe:1 appropriate:1 alternative:2 yl1:1 robustness:1 existence:1 original:1 denotes:3 standardized:1 a4:4 medicine:1 k1:3 society:1 bl:2 moshe:1 degrades:2 dependence:1 usual:1 diagonal:1 thank:1 berlin:6 simulated:1 clarifying:1 evenly:1 me:2 discriminant:3 index:3 relationship:1 illustration:1 ratio:2 retained:1 robert:7 potentially:1 trace:1 design:2 contributed:1 gilles:1 upper:1 observation:10 dispersion:21 finite:2 subsume:1 situation:1 y1:1 niklas:2 perturbation:1 arbitrary:2 introduced:1 complement:12 required:1 kl:1 z1:1 korbinian:1 california:1 able:1 below:1 pattern:1 ev:5 eighth:4 perception:2 program:1 max:6 power:1 event:1 regularized:1 blankertz:2 improve:1 technology:1 theta:1 genomics:1 text:1 multiplication:1 relative:1 law:1 loss:1 var:6 validation:3 eigendecomposition:2 foundation:1 sufficient:1 consistent:3 uncorrelated:1 pi:3 genetics:1 supported:1 last:2 free:2 bias:3 weaker:1 tick:1 wide:1 absolute:1 sparse:1 dimension:4 xn:3 world:12 calculated:1 yi4:4 transaction:1 correlate:1 boaz:1 ml:1 decides:1 consuming:1 xi:2 spectrum:2 table:6 additionally:1 xjs:1 eeg:2 mse:1 bartz:3 european:1 artificially:1 domain:2 complex:1 sp:3 main:1 dense:1 spread:1 yi2:2 reuters:1 noise:11 arise:1 whole:1 chosing:1 cpi:2 sub:1 third:1 formula:4 theorem:11 decay:1 a3:1 exists:6 essential:1 incorporating:1 curio:2 adding:1 illustrates:1 conditioned:1 vii:1 generalizing:1 yin:1 depicted:1 unexpected:1 adjustment:1 springer:1 wolf:3 corresponds:1 relies:1 identity:1 replace:2 typical:1 averaging:1 principal:1 conservative:2 unfavorable:1 rarely:1 select:1 mark:1 seoul:1 jonathan:1 categorize:1 relevance:1 violated:3 dickhaus:1 tested:4 correlated:2 |
4,334 | 4,922 | Robust Spatial Filtering with Beta Divergence
Wojciech Samek1,4
1
Duncan Blythe1,4
1,2
?
Klaus-Robert Muller
Motoaki Kawanabe3
Machine Learning Group, Berlin Institute of Technology (TU Berlin), Berlin, German
2
Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
3
ATR Brain Information Communication Research Laboratory Group, Kyoto, Japan
4
Bernstein Center for Computational Neuroscience, Berlin, Germany
Abstract
The efficiency of Brain-Computer Interfaces (BCI) largely depends upon a reliable
extraction of informative features from the high-dimensional EEG signal. A crucial step in this protocol is the computation of spatial filters. The Common Spatial
Patterns (CSP) algorithm computes filters that maximize the difference in band
power between two conditions, thus it is tailored to extract the relevant information in motor imagery experiments. However, CSP is highly sensitive to artifacts
in the EEG data, i.e. few outliers may alter the estimate drastically and decrease
classification performance. Inspired by concepts from the field of information geometry we propose a novel approach for robustifying CSP. More precisely, we
formulate CSP as a divergence maximization problem and utilize the property of
a particular type of divergence, namely beta divergence, for robustifying the estimation of spatial filters in the presence of artifacts in the data. We demonstrate the
usefulness of our method on toy data and on EEG recordings from 80 subjects.
1
Introduction
Spatial filtering is a crucial step in the reliable decoding of user intention in Brain-Computer Interfacing (BCI) [1, 2]. It reduces the adverse effects of volume conduction and simplifies the classification problem by increasing the signal-to-noise-ratio. The Common Spatial Patterns (CSP)
[3, 4, 5, 6] method is one of the most widely used algorithms for computing spatial filters in motor imagery experiments. A spatial filter computed with CSP maximizes the differences in band
power between two conditions, thus it aims to enhance detection of the synchronization and desynchronization effects occurring over different locations of the sensorimotor cortex after performing
motor imagery. It is well known that CSP may provide poor results when artifacts are present in
the data or when the data is non-stationary [7, 8]. Note that artifacts in the data are often unavoidable and can not always be removed by preprocessing, e.g. with Independent Component Analysis.
They may be due to eye movements, muscle movements, loose electrodes, sudden changes of attention, circulation, respiration, external events, among the many possibilities. A straight forward way
to robustify CSP against overfitting is to regularize the filters or the covariance matrix estimation
[3, 7, 9, 10, 11]. Several other strategies have been proposed for estimating spatial filters under
non-stationarity [12, 8, 13, 14].
In this work we propose a novel approach for robustifying CSP inspired from recent results in the
field of information geometry [15, 16]. We show that CSP may be formulated as a divergence
maximization problem, in particular we prove by using Cauchy?s interlacing theorem [17] that the
spatial filters found by CSP span a subspace with maximum symmetric Kullback-Leibler divergence
between the distributions of both classes. In order to robustify the CSP algorithm against the influence of outliers we propose solving the divergence maximization problem with a particular type of
1
divergence, namely beta divergence. This divergence has been successfully used for robustifying
algorithms such as Independent Component Analysis (ICA) [18] and Non-negative Matrix Factorization (NMF) [19]. In order to capture artifacts on a trial-by-trial basis we reformulate the CSP
problem as sum of trial-wise divergences and show that our method downweights the influence of
artifactual trials, thus it robustly integrates information from all trials.
The remainder of this paper is organized as follows. Section 2 introduces the divergence-based
framework for CSP. Section 3 describes the beta-divergence CSP method and discusses its robustness property. Section 4 evaluates the method on toy data and EEG recordings from 80 subjects
and interprets the performance improvement. Section 5 concludes the paper with a discussion. An
implementation of our method is available at http://www.divergence-methods.org.
2
Divergence-Based Framework for CSP
Spatial filters computed by the Common Spatial Patterns (CSP) [3, 4, 5] algorithm have been widely
used in Brain-Computer Interfacing as they are well suited to discriminate between distinct motor
imagery patterns. A CSP spatial filter w maximizes the variance of band-pass filtered EEG signals
in one condition while minimizing it in the other condition. Mathematically the CSP solution can
be obtained by solving the generalized eigenvalue problem
?1 wi = ?i ?2 wi
(1)
where ?1 and ?2 are the estimated (average) D ? D covariance matrices of class 1 and 2,
respectively. Note that the spatial filters W = [w1 . . . wD ] can be sorted by importance
?1 = max{?1 , ?11 } > . . . > ?D = max{?D , ?1D }.
2.1
divCSP Algorithm
Information geometry [15] has provided useful frameworks for developing various machine learning
(ML) algorithms, e.g. by optimizing divergences between two different probability distributions [20]
[21]. In particular, a series of robust ML methods have been successfully obtained from Bregman
divergences which are generalization of the Kullback-Leibler (KL) divergence [22]. Among them,
we employ in this work the beta divergence. Before proposing our novel algorithm, we show that
CSP can also be interpreted as maximization of the symmetric KL divergence.
Theorem 1: Let W = [w1 . . . wd ] be the d top (sorted by ?i ) spatial filters computed by CSP and let
? be a d ? D dimensional
?1 and ?2 denote the covariance matrices of class 1 and 2. Let V> = RP
matrix that can be decomposed into a whitening projection P ? RD?D (P(?1 + ?2 )P> = I) and
? ? Rd?D . Then
an orthogonal projection R
span(W) = span(V? )
(2)
?
>
>
?
with V = argmax Dkl V ?1 V || V ?2 V
(3)
V
? kl (? || ?) denotes the symmetric Kullback-Leibler Divergence1 between zero mean Gauswhere D
sians and span(M) stands for the subspace spanned by the columns of matrix M. Note that [23]
has provided a proof for the special case of one spatial filter, i.e. for V ? RD?1 .
Proof: See appendix and supplement material.
The objective function that is maximized in Eq. (3) can be written as
1
1
Lkl (V) =
tr (V> ?1 V)?1 (V> ?2 V) + tr (V> ?2 V)?1 (V> ?1 V) ? d. (4)
2
2
In order to cater for artifacts on a trial-by-trial basis we need to reformulate the above objective
function. Instead of maximizing the divergence between the average class distributions we propose
to optimize the sum of trial-wise divergences
Lsumkl (V)
=
N
X
? kl V> ?i V || V> ?i V ,
D
1
2
i=1
1
The symmetric Kullback-Leibler Divergence between distributions f (x) and g(x) is defined as
R
R
(x)
?
Dkl (f (x) || g(x)) = f (x) ? log fg(x)
dx + g(x) ? log fg(x)
dx.
(x)
2
(5)
where ?i1 and ?i2 denote the covariance matrices estimated from the i-th trial of class 1 and class
2, respectively, and N is the number of trials per class. Note that the reformulated problem is
not equivalent to CSP; in Eq. (4) averaging is performed w.r.t. the covariance matrices, whereas in
Eq. (5) it is performed w.r.t. the divergences. We denote the former approach by kl-divCSP and the
latter one by sumkl-divCSP. The following theorem relates both approaches in the asymptotic case.
Theorem 2: Suppose that the number of discriminative sources is one; then let c be such that
D/n ? c as D, n ? ? (D dimensions, n data points per trial). Then if there exists ?(c) with
N/D ? ?(c) for N ? ? (N the number of trials) then the empirical maximizer of Lsumkl (v)
(and of course also of Lkl (v)) converges almost surely to the true solution.
Sketched Proof: See appendix.
Thus Theorem 2 says that both divergence-based CSP variants kl-divCSP and sumkl-divCSP almost
surely converge to the same (true) solution in the asymptotic case. The theorem can be easily
extended to multiple discriminative sources.
2.2
Optimization Framework
We use the methods developed in [24], [25] and [26] for solving the maximization problems in
Eq. (4) and Eq. (5). The projection V ? RD?d to the d-dimensional subspace can be decomposed
into three parts, namely V> = Id RP where Id is an identity matrix truncated to the first d rows, R
is a rotation matrix with RR> = I and P is a whitening matrix. The optimization process consists
of finding the rotation R that maximizes our objective function and can be performed by gradient
descent on the manifold of orthogonal matrices. More precisely, we start with an orthogonal matrix
R0 and find an orthogonal update U in the k-th step such that Rk+1 = URk . The update matrix
is chosen by identifying the direction of steepest descent in the set of orthogonal transformations
and then performing a line search along this direction to find the optimal step. Since the basis of
the extracted subspace is arbitrary (one can right multiply a rotation matrix to V without changing
the divergence), we select the principal axes of the data distribution of one class (after projection)
as basis in order to maximally separate the two classes. The optimization process is summarized in
Algorithm 1 and explained in the supplement material of the paper.
Algorithm 1 Divergence-based Framework for CSP
1: function DIV CSP(?1 , ?2 , d)
1
2:
Compute the whitening matrix P = ?? 2
3:
Initialise R0 with a random rotation matrix
4:
Whiten and rotate the data ?c = (R0 P)?c (R0 P)> with c = {1, 2}
5:
repeat
6:
Compute the gradient matrix and determine the step size (see supplement material)
7:
Update the rotation matrix Rk+1 = URk
8:
Apply the rotation to the data ?c = U?c U>
9:
until convergence
10:
Let V> = Id Rk+1 P
11:
Rotate V by G ? Rd?d where G are eigenvectors of V> ?1 V
12:
return V
13: end function
3
Beta Divergence CSP
Robustness is a desirable property of algorithms that work in data setups which are known to be
contaminated by outliers. For example, in the biomedical fields, signals such as EEG may be highly
affected by artifacts, i.e. outliers, which may drastically influence statistical estimation. Note that
both of the above approaches kl-divCSP and sumkl-divCSP are not robust w.r.t. artifacts as they
both perform simple (non-robust) averaging of the covariance matrices and of the divergence terms,
respectively. In this section we show that by using beta divergence we robustify the averaging of the
divergence terms as beta divergence downweights the influence of outlier trials.
3
Beta divergence was proposed in [16, 27] and is defined (for ? > 0) as
Z
Z
1
1
?
?
D? (f (x) || g(x)) =
(f (x) ? g (x))f (x)dx ?
(f ?+1 (x) ? g ?+1 (x))dx,
?
?+1
(6)
where f (x) and g(x) are two probability distributions. Like every statistical divergence it is
always positive and equals zero iff g = f [15]. The symmetric version of beta divergence
? ? (f (x) || g(x)) = D? (f (x) || g(x)) + D? (g(x) || f (x)) can be interpreted as discrepancy
D
between two probability distributions. One can show easily that beta and Kullback-Leibler divergence coincide as ? ? 0.
In the context of parameter estimation, one can show that minimizing the divergence function from
an empirical distribution p to the statistical model q(?) is equivalent to maximizing the ?-likelihood
? ? (?)
L
?
? ? (q(?))
argmin D? (p || q(?)) = argmax L
?
q(?)
(7)
q(?)
n
X
exp(?z) ? 1
? ? (q(?)) = 1
with L
?? (`(xi , q(?))) ? b?? (?) and ?? (z) =
,
?
n i=1
?
(8)
where `(xi R, q(?)) denotes the log-likelihood of observation xi and distribution q(?), and b?? (?) :=
(? + 1)?1 q(?)?+1 dx. Basu et al. [27] showed that the ?-likelihood method weights each observation according to the magnitude of likelihood evaluated at the observation; if an observation is an
outlier, i.e. of lower likelihood, then it is downweighted. Thus, beta divergence allows to construct
robust estimators as samples with low likelihood are downweighted (see also M-estimators [28]).
?-divCSP Algorithm
We propose applying beta divergence to the objective function in Eq. (5) in order to downweight the
influence of artifacts in the computation of spatial filters. An overview over the different divergencebased CSP variants is provided in Figure 1. The objective function of our ?-divCSP approach is
X
? ? VT ?i1 V || VT ?i2 V
L? (V) =
D
(9)
i
=
Z
Z
Z
Z
1X
?+1
?+1
?
?
gi dx + fi dx ? fi gi dx ? fi gi dx ,
? i
(10)
? i and gi = N 0, ?
? i being the zero-mean Gaussian distributions with prowith fi = N 0, ?
1
2
? i = VT ?i V ? Rd?d and ?
? i = VT ?i V ? Rd?d , respectively.
jected covariances ?
1
1
2
2
One can show easily (see the supplement file to this paper) that L? (V) has an explicit form
X
? i |? ?2 + |?
? i |? ?2 ? (? + 1) d2 |?
? i | 1??
?i + ?
? i |? 12 + |?
? i | 1??
?i + ?
? i |? 12
2 |? ?
2 |? ?
?
|?
, (11)
1
2
2
1
2
1
2
1
i
q
with ? = ?1 (2?)?d1(?+1)d . We use Algorithm 1 to maximize the objective function of ?-divCSP.
In the following we show that the robustness property of ?-divCSP can be directly understood from
inspection of its objective function.
Assume Σ̂_1^i and Σ̂_2^i are full rank d × d covariance matrices. We investigate the behaviour of the objective functions of β-divCSP and sumkl-divCSP when Σ̂_1^i is constant and Σ̂_2^i becomes very large, e.g. because it is affected by artifacts. It is not hard to see that for β > 0 the objective function L_β does not go to infinity but is constant as Σ̂_2^i becomes arbitrarily large. The first term of the objective function, |Σ̂_1^i|^{−β/2}, is constant with respect to changes of Σ̂_2^i, and all the other terms go to zero as Σ̂_2^i increases. Thus the influence function of the β-divCSP estimator is bounded w.r.t. changes in Σ̂_2^i (the same argument holds for changes of Σ̂_1^i). Note that this robustness property vanishes when applying Kullback-Leibler divergences, Eq. (4), as the trace term tr((Σ̂_1^i)^{−1} Σ̂_2^i) is not bounded when Σ̂_2^i becomes arbitrarily large; thus this artifactual term will dominate the solution.
Figure 1: Relation between the different CSP formulations outlined in this paper (diagram relating CSP and the robust variants kl-divCSP, sumkl-divCSP and β-divCSP).
4 Experimental Evaluation
4.1 Simulations
In order to investigate the effects of artifactual trials on CSP and β-divCSP we generate data x(t) using the following mixture model

x(t) = A [ s^{dis}(t) ; s^{ndis}(t) ] + ε,   (12)

where A ∈ R^{10×10} is a random orthogonal mixing matrix, s^{dis} is a discriminative source sampled from a zero-mean Gaussian with variance 1.8 in one condition and 0.2 in the other one, s^{ndis} are 9 sources with variance 1 in both conditions, and ε is a noise variable with variance 2. We generate 100 trials per condition, each consisting of 200 data points. Furthermore, we randomly add artifacts with variance 10 independently to each data dimension (i.e. virtual electrode) and trial with varying probability, and evaluate the angle between the true filter extracting the source activity of s^{dis} and the spatial filter computed by CSP and β-divCSP. The median angles over 100 repetitions are shown in Figure 2. One can clearly see that the angle error between the spatial filter extracted by CSP and the true one increases with larger artifact probability. Furthermore, one can see from the figure that using very small β values does not attenuate the artefact problem, but rather increases the error by adding up trial-wise divergences without downweighting outliers. However, as the β value increases, the artifactual trials are downweighted and a robust average is computed over the trial-wise divergence terms. This increased robustness significantly reduces the angle error.
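A minimal generator for this simulation, under our own choices of random seed and trial layout (trials × channels × time), could look as follows.

import numpy as np

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((10, 10)))[0]    # random orthogonal mixing matrix

def make_trials(n_trials, var_dis, p_artifact, T=200):
    X = np.empty((n_trials, 10, T))
    for k in range(n_trials):
        s = rng.standard_normal((10, T))              # 9 sources with variance 1
        s[0] *= np.sqrt(var_dis)                      # discriminative source
        x = A @ s + np.sqrt(2.0) * rng.standard_normal((10, T))   # noise, variance 2
        mask = rng.random(10) < p_artifact            # artifacts per virtual electrode
        x[mask] += np.sqrt(10.0) * rng.standard_normal((mask.sum(), T))
        X[k] = x
    return X

cond1 = make_trials(100, 1.8, 0.01)                   # condition with variance 1.8
cond2 = make_trials(100, 0.2, 0.01)                   # condition with variance 0.2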
Figure 2: Angle between the true spatial filter and the filter computed by CSP and β-divCSP for different probabilities of artifacts (panels for outlier probabilities 0, 0.001, 0.005, 0.01, 0.02 and 0.05; angle error in degrees plotted against the beta value). The robustness of our approach increases with the β value and significantly outperforms the CSP solution.
4.2 Data Sets and Experimental Setup
The data set [29] used for the evaluation contains EEG recordings from 80 healthy BCI-inexperienced volunteers performing motor imagery tasks with the left and right hand or feet. The
subjects performed motor imagery first in a calibration session and then in a feedback mode in which
they were required to control a 1D cursor application. Activity was recorded from the scalp with
multi-channel EEG amplifiers using 119 Ag/AgCl electrodes in an extended 10-20 system sampled
at 1000 Hz (downsampled to 100 Hz) and a band-pass from 0.05 to 200 Hz. Three runs with 25
trials of each motor condition were recorded in the calibration session and the two best classes were
selected; the subjects performed feedback with three runs of 100 trials. Both sessions were recorded
on the same day.
For the offline analysis we manually select 62 electrodes densely covering the motor cortex, extract
a time segment located from 750ms to 3500ms after the cue indicating the motor imagery class and
filter the signal in 8-30 Hz using a 5-th order Butterworth filter. We do not apply manual or automatic
rejection of trials or electrodes and use six spatial filters for feature extraction. For classification
we apply Linear Discriminant Analysis (LDA) after computing the logarithm of the variance on
the spatially filtered data. We measure performance as misclassification rate and normalize the
covariance matrices by dividing them by their traces. The parameter β is selected from the set of 15 candidates {0, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.5, 0.75, 1, 1.5, 2, 5} by 5-fold
cross-validation on the calibration data using minimal training error rate as selection criterion. For
faster convergence we use the rotation part of the CSP solution as initial rotation matrix R0 .
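For reference, a compact sketch of the described feature-extraction and classification pipeline (spatial filtering, log-variance features, LDA) is given below; the variable names and the pre-computed filter matrix V6 are placeholders, not code from our implementation.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def logvar_features(trials, V):
    """trials: (n_trials, channels, time); V: (channels, n_filters)."""
    projected = np.einsum('ck,nct->nkt', V, trials)   # apply the spatial filters
    return np.log(projected.var(axis=2))              # log variance = band-power proxy

# V6: six spatial filters from (beta-div)CSP; X_train, y_train: band-passed epochs/labels
# feats = logvar_features(X_train, V6)
# clf = LinearDiscriminantAnalysis().fit(feats, y_train)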
4.3 Results
We compare our β-divCSP method with three CSP baselines using different estimators for the covariance matrices. The first baseline uses the standard empirical estimator, the second one applies a
standard analytic shrinkage estimator [9] and the third one relies on the minimum covariance determinant (MCDE) estimate [30]. Note that the shrinkage estimator usually provides better estimates in small-sample settings, whereas MCDE is robust to outliers. In order to perform a fair comparison we applied MCDE over various ranges [0, 0.05, 0.1, ..., 0.5] of parameters and selected the best one by cross-validation (as with β-divCSP). The MCDE parameter determines the expected proportion of artifacts in the data. The results are shown in Figure 3. Each circle denotes the error rate of one subject. One can see that the β-divCSP method outperforms the baselines, as most circles are below the solid line. Furthermore, the performance increases are significant according to the one-sided Wilcoxon signed-rank test, as the p-values are smaller than 0.05.
Figure 3: Performance results of the CSP, shrinkage + CSP and MCDE + CSP baselines compared to β-divCSP (error rates in %; one-sided Wilcoxon signed-rank test p-values: p = 0.0005, p = 0.0178 and p = 0.0407, respectively). Each circle represents the error rate of one subject. Our method outperforms the baselines for circles that are below the solid line. The p-values are shown in the lower right corner of each panel.
We made an interesting observation when analysing the subject with the largest improvement over the CSP baseline; the error rates were 48.6% (CSP), 48.6% (MCDE+CSP) and 11.0% (β-divCSP). Over all ranges of MCDE parameters this subject has an error rate higher than 48%, i.e. MCDE was not able to help in this case. This example shows that β-divCSP and MCDE are not equivalent. Enforcing robustness in the CSP algorithm may in some cases be better than enforcing robustness when estimating the covariance matrices.
In the following we study the robustness property of the β-divCSP method on subject 74, the user with the largest improvement (CSP error rate: 48.6% and β-divCSP error rate: 11.0%). The left panel of Figure 4 displays the activity pattern associated with the most important CSP filter of subject 74. One can clearly see that the pattern does not encode neurophysiologically relevant activity, but focuses on a single electrode, namely FFC6. When analysing the (filtered) EEG signal of this electrode one can identify a strong artifact in one of the trials. Since neither the empirical covariance estimator nor the CSP algorithm is robust to this kind of outliers, it dominates the solution. However, the resulting pattern is meaningless as it does not capture motor imagery related activity. The right panel of Figure 4 shows the relative importance of the divergence term of the artifactual trial with respect to the average divergence terms of the other trials. One can see that the divergence term computed from the artifactual trial is over 1800 times larger than the average of the other trials. This ratio decreases rapidly for larger β values, thus the influence of the artifact decreases. Our experiments therefore provide an excellent example of the robustness property of the β-divCSP approach.
Figure 4: Left: The CSP pattern of subject 74 does not reflect neurophysiological activity but represents the artifact (red ellipse) in electrode FFC6. Right: The relative importance of this artifactual trial (percentage of the artefact term plotted against the beta value) decreases with the β parameter. The relative importance is measured as the quotient between the divergence term of the artifactual trial and the average divergence terms of the other trials.
5 Discussion
Analysis of EEG data is challenging because the signal of interest is typically present with a low signal-to-noise ratio. Moreover, artifacts and non-stationarity require robust algorithms. This paper has focused on robust estimation and proposed a novel algorithm family, giving rise to a beta divergence algorithm which allows robust spatial filter computation for BCI. In the very common setting where EEG electrodes become loose or movement-related artifacts occur in some trials, it is a practical necessity to either ignore these trials (which reduces an already small sample size further) or to enforce intrinsic invariance to these disturbances in the learning procedures. Here, we have used CSP, the standard filtering technique in BCI, as a starting point and reformulated it in terms of an optimization problem maximizing the divergence between the class distributions that correspond to two cognitive states. By borrowing the concept of beta divergences, we could adapt the optimization problem and arrive at a robust spatial filter computation based on CSP. We showed that our novel method can significantly reduce the influence of artifacts in the data and thus allows us to robustly extract relevant filters for BCI applications.
In future work we will investigate the properties of other divergences for Brain-Computer Interfacing and also consider further applications such as ERP-based BCIs [31] and applications beyond the neurosciences.
Acknowledgment We thank Daniel Bartz and Frank C. Meinecke for valuable discussions. This
work was supported by the German Research Foundation (GRK 1589/1), by the Federal Ministry
of Education and Research (BMBF) under the project Adaptive BCI (FKZ 01GQ1115) and by the
Brain Korea 21 Plus Program through the National Research Foundation of Korea funded by the
Ministry of Education.
Appendix
Sketch of proof of Theorem 1
Cauchy's interlacing theorem [17] establishes a relation between the eigenvalues λ_1 ≥ ... ≥ λ_D of the original covariance matrix Σ and the eigenvalues μ_1 ≥ ... ≥ μ_d of the projected one, VΣV⊤. The theorem says that λ_j ≥ μ_j ≥ λ_{D−d+j}. In the proof we split the optimal projection V* into two parts, U_1 ∈ R^{k×D} and U_2 ∈ R^{(d−k)×D}, based on whether the first or second trace term in Eq. (4) is larger when applying the spatial filters. By using Cauchy's theorem we then show that L_kl(U) ≤ L_kl(W), where W consists of the k eigenvectors with largest eigenvalues; equality only holds if U and W coincide (up to linear transformations). We show an analogous relation for U_2 and conclude that V* must be the CSP solution (up to linear transformations). See the full proof in the supplement material.
Sketch of the proof of Theorem 2
Since there is only one discriminative direction, we may perform the analysis in a basis in which the covariances of both classes have the form diag(a, 1, ..., 1) and diag(b, 1, ..., 1). If we show in this basis that consistency holds, then it is a simple matter to prove consistency in the original basis. We want to show that as the number of trials N increases, the filter provided by sumkl-divCSP converges to the true solution v*. If the support of the density of the eigenvalues includes a region around 0, then there is no hope of showing that the matrix inversion is stable. However, it has been shown in the random matrix theory literature [32] that if D and n tend to ∞ in a ratio c = D/n, then all of the eigenvalues apart from the largest lie between (1 − √c)² and (1 + √c)², whereas the largest sample eigenvalue (λ denotes the true non-unit eigenvalue) converges almost surely to λ + cλ/(λ−1), provided λ > 1 + √c, independently of the distribution of the data; a similar result applies if one true eigenvalue is smaller than the rest. This implies that, for sufficient discriminability in the true distribution and sufficiently many data points per trial, each filter maximizing each term in the sum has non-zero dot-product with the true maximizing filter. But since the trials are independent, this implies that in the limit of N trials the maximizing filter corresponds to the true filter. Note that the full proof goes well beyond the scope of this contribution.
References
[1] G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, Eds., Toward Brain-Computer Interfacing. Cambridge, MA: MIT Press, 2007.
[2] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767-791, 2002.
[3] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller, "Optimizing spatial filters for robust EEG single-trial analysis," IEEE Signal Proc. Magazine, vol. 25, no. 1, pp. 41-56, 2008.
[4] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehab. Eng., vol. 8, no. 4, pp. 441-446, 1998.
[5] L. C. Parra, C. D. Spence, A. D. Gerson, and P. Sajda, "Recipes for the linear analysis of EEG," NeuroImage, vol. 28, pp. 326-341, 2005.
[6] S. Lemm, B. Blankertz, T. Dickhaus, and K.-R. Müller, "Introduction to machine learning for brain imaging," NeuroImage, vol. 56, no. 2, pp. 387-399, 2011.
[7] F. Lotte and C. Guan, "Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms," IEEE Trans. Biomed. Eng., vol. 58, no. 2, pp. 355-362, 2011.
[8] W. Samek, C. Vidaurre, K.-R. Müller, and M. Kawanabe, "Stationary common spatial patterns for brain-computer interfacing," Journal of Neural Engineering, vol. 9, no. 2, p. 026013, 2012.
[9] O. Ledoit and M. Wolf, "A well-conditioned estimator for large-dimensional covariance matrices," Journal of Multivariate Analysis, vol. 88, no. 2, pp. 365-411, 2004.
[10] H. Lu, H.-L. Eng, C. Guan, K. Plataniotis, and A. Venetsanopoulos, "Regularized common spatial pattern with aggregation for EEG classification in small-sample setting," IEEE Transactions on Biomedical Engineering, vol. 57, no. 12, pp. 2936-2946, 2010.
[11] D. Devlaminck, B. Wyns, M. Grosse-Wentrup, G. Otte, and P. Santens, "Multi-subject learning for common spatial patterns in motor-imagery BCI," Computational Intelligence and Neuroscience, vol. 2011, no. 217987, pp. 1-9, 2011.
[12] B. Blankertz, M. Kawanabe, R. Tomioka, F. U. Hohlefeld, V. Nikulin, and K.-R. Müller, "Invariant common spatial patterns: Alleviating nonstationarities in brain-computer interfacing," in Advances in NIPS 20, 2008, pp. 113-120.
[13] W. Samek, F. C. Meinecke, and K.-R. Müller, "Transferring subspaces between subjects in brain-computer interfacing," IEEE Transactions on Biomedical Engineering, vol. 60, no. 8, pp. 2289-2298, 2013.
[14] M. Arvaneh, C. Guan, K. K. Ang, and C. Quek, "Optimizing spatial filters by minimizing within-class dissimilarities in electroencephalogram-based brain-computer interface," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, pp. 610-619, 2013.
[15] S. Amari, H. Nagaoka, and D. Harada, Methods of Information Geometry. American Mathematical Society, 2000.
[16] S. Eguchi and Y. Kano, "Robustifying maximum likelihood estimation," Tokyo Institute of Statistical Mathematics, Tokyo, Japan, Tech. Rep., 2001.
[17] R. Bhatia, Matrix Analysis, ser. Graduate Texts in Mathematics. Springer, 1997, vol. 169.
[18] M. Mihoko and S. Eguchi, "Robust blind source separation by beta divergence," Neural Comput., vol. 14, no. 8, pp. 1859-1886, Aug. 2002.
[19] C. Févotte and J. Idier, "Algorithms for nonnegative matrix factorization with the β-divergence," Neural Comput., vol. 23, no. 9, pp. 2421-2456, Sep. 2011.
[20] A. Hyvärinen, "Survey on independent component analysis," Neural Computing Surveys, vol. 2, pp. 94-128, 1999.
[21] M. Kawanabe, W. Samek, P. von Bünau, and F. Meinecke, "An information geometrical view of stationary subspace analysis," in Artificial Neural Networks and Machine Learning - ICANN 2011, ser. LNCS. Springer Berlin / Heidelberg, 2011, vol. 6792, pp. 397-404.
[22] N. Murata, T. Takenouchi, and T. Kanamori, "Information geometry of U-boost and Bregman divergence," Neural Computation, vol. 16, pp. 1437-1481, 2004.
[23] H. Wang, "Harmonic mean of Kullback-Leibler divergences for optimizing multi-class EEG spatio-temporal filters," Neural Processing Letters, vol. 36, no. 2, pp. 161-171, 2012.
[24] P. von Bünau, F. C. Meinecke, F. C. Király, and K.-R. Müller, "Finding stationary subspaces in multivariate time series," Physical Review Letters, vol. 103, no. 21, p. 214101, 2009.
[25] P. von Bünau, "Stationary subspace analysis - towards understanding non-stationary data," Ph.D. dissertation, Technische Universität Berlin, 2012.
[26] W. Samek, M. Kawanabe, and K.-R. Müller, "Divergence-based framework for common spatial patterns algorithms," IEEE Reviews in Biomedical Engineering, 2014, in press.
[27] A. Basu, I. R. Harris, N. L. Hjort, and M. C. Jones, "Robust and efficient estimation by minimising a density power divergence," Biometrika, vol. 85, no. 3, pp. 549-559, 1998.
[28] P. J. Huber, Robust Statistics, ser. Wiley Series in Probability and Statistics. Wiley-Interscience, 1981.
[29] B. Blankertz, C. Sannelli, S. Halder, E. M. Hammer, A. Kübler, K.-R. Müller, G. Curio, and T. Dickhaus, "Neurophysiological predictor of SMR-based BCI performance," NeuroImage, vol. 51, no. 4, pp. 1303-1309, 2010.
[30] P. J. Rousseeuw and K. V. Driessen, "A fast algorithm for the minimum covariance determinant estimator," Technometrics, vol. 41, no. 3, pp. 212-223, 1999.
[31] B. Blankertz, S. Lemm, M. S. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components - a tutorial," NeuroImage, vol. 56, no. 2, pp. 814-825, 2011.
[32] J. Baik and J. Silverstein, "Eigenvalues of large sample covariance matrices of spiked population models," Journal of Multivariate Analysis, vol. 97, no. 6, pp. 1382-1408, 2006.
4,335 | 4,923 | BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables
Cho-Jui Hsieh, Mátyás A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
{cjhsieh,sustik,inderjit,pradeepr}@cs.utexas.edu
Russell A. Poldrack
Department of Psychology and Neurobiology
University of Texas at Austin
[email protected]
Abstract
The ℓ1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix even under high-dimensional settings. However, it requires solving a difficult non-smooth log-determinant program with number of parameters scaling quadratically with the number of Gaussian variables. State-of-the-art methods thus do not scale to problems with more than 20,000 variables. In this paper, we develop an algorithm BIGQUIC, which can solve 1 million dimensional ℓ1-regularized Gaussian MLE problems (which would thus have 1000 billion parameters) using a single machine, with bounded memory. In order to do so, we carefully exploit the underlying structure of the problem. Our innovations include a novel block-coordinate descent method with the blocks chosen via a clustering scheme to minimize repeated computations; and allowing for inexact computation of specific components. In spite of these modifications, we are able to theoretically analyze our procedure and show that BIGQUIC can achieve super-linear or even quadratic convergence rates.
1 Introduction
Let {y_1, y_2, ..., y_n} be n samples drawn from a p-dimensional Gaussian distribution N(μ, Σ), also known as a Gaussian Markov Random Field (GMRF). An important problem is that of recovering the covariance matrix (or its inverse) of this distribution, given the n samples, in a high-dimensional regime where n ≪ p. A popular approach involves leveraging the structure of sparsity in the inverse covariance matrix, and solving the following ℓ1-regularized maximum likelihood problem:

arg min_{Θ≻0} { −log det Θ + tr(SΘ) + λ‖Θ‖_1 } = arg min_{Θ≻0} f(Θ),   (1)

where S = (1/n) Σ_{i=1}^n (y_i − μ̂)(y_i − μ̂)⊤ is the sample covariance matrix and μ̂ = (1/n) Σ_{i=1}^n y_i is the sample mean. While the non-smooth log-determinant program in (1) is usually considered a difficult optimization problem to solve, due in part to its importance, there has been a long line of recent work on algorithms to solve (1): see [7, 6, 3, 16, 17, 18, 15, 11] and references therein. The state-of-the-art seems to be a second order method QUIC [9] that has been shown to achieve super-linear convergence rates. Complementary techniques such as exact covariance thresholding [13, 19], and the divide and conquer approach of [8], have also been proposed to speed up the solvers. However, as noted in [8], the above methods do not scale to problems with more than 20,000 variables, and typically require several hours even for smaller dimensional problems involving ten thousand variables. There has been some interest in statistical estimators other than (1) that are more amenable to optimization: including solving node-wise Lasso regression problems [14] and the separable linear program based CLIME estimator [2]. However the caveat with these estimators is that they are not guaranteed to yield a positive-definite covariance matrix, and typically yield less accurate parameters.
What if we want to solve the M-estimator in (1) with a million variables? Note that the number of parameters in (1) is quadratic in the number of variables, so that for a million variables, we would have a trillion parameters. There has been considerable recent interest in such "Big Data" problems involving large-scale optimization: these however are either targeted to "big-n" problems with a lot of samples, unlike the constraint of "big-p" with a large number of variables in our problem, or are based on large-scale distributed and parallel frameworks, which require a cluster of processors, as well as software infrastructure to run the programs over such clusters. At least one caveat with such large-scale distributed frameworks is that they would be less amenable to exploratory data analysis by "lay users" of such GMRFs. Here we ask the following ambitious but simple question: can we solve the M-estimator in (1) with a million variables using a single machine with bounded memory? This might not seem like a viable task at all in general, but note that the optimization problem in (1) arises from a very structured statistical estimation problem: can we leverage the underlying structure to be able to solve such an ultra-large-scale problem?
In this paper, we propose a new solver, BIGQUIC, to solve the ℓ1-regularized Gaussian MLE problem with extremely high dimensional data. Our method can solve one million dimensional problems with 1000 billion variables using a single machine with 32 cores and 32G memory. Our proposed method is based on the state-of-the-art framework of QUIC [9, 8]. The key bottleneck in QUIC stems from the memory required to store the gradient W = X⁻¹ of the iterates X, which is a dense p × p matrix, and the computation of the log-determinant function of a p × p matrix. A starting point to reduce the memory footprint is to use sparse representations for the iterates X and compute the elements of the empirical covariance matrix S on demand from the sample data points. In addition we also have to avoid the storage of the dense matrix X⁻¹ and perform intermediate computations involving functions of such dense matrices on demand. These naive approaches to reduce the memory would, however, considerably increase the computational complexity, among other caveats, which would make the algorithm highly impractical.
To address this, we present three key innovations. Our first is to carry out the coordinate descent computations in a blockwise manner, and by selecting the blocks very carefully using an automated clustering scheme, we not only leverage sparsity of the iterates, but also cache computations suitably. Secondly, we reduce the computation of the log-determinant function to linear equation solving using the Schur decomposition, which also exploits the symmetry of the matrices in question. Lastly, since the Hessian computation is a key bottleneck in the second-order method, we compute it inexactly. We show that even with these modifications and inexact computations, we can still guarantee not only convergence of our overall procedure, but can easily control the degree of approximation of the Hessian to achieve super-linear or even quadratic convergence rates. In spite of our low memory footprint, these innovations allow us to beat the state-of-the-art DC-QUIC algorithm (which has no memory limits) in computational complexity even on medium-sized problems of a few thousand variables. Finally, we show how to parallelize our method in a multicore shared-memory system.
The paper is organized as follows. In Section 2, we briefly review the QUIC algorithm and outline the difficulties of scaling QUIC to million dimensional data. Our algorithm is proposed in Section 3. We theoretically analyze our algorithm in Section 4, and present experimental results in Section 5.
2 Difficulties in scaling QUIC to million dimensional data
Our proposed algorithm is based on the framework of QUIC [9], which is a state-of-the-art procedure for solving (1), based on a second-order optimization method. We present a brief review of the algorithm, and then explain the key bottlenecks that arise when scaling it to million dimensions. Since the objective function of (1) is non-smooth, we can separate the smooth and non-smooth parts as f(X) = g(X) + h(X), where g(X) = −log det X + tr(SX) and h(X) = λ‖X‖_1.
QUIC is a second-order method that iteratively solves for a generalized Newton direction using coordinate descent, and then descends using this generalized Newton direction and line-search. To leverage the sparsity of the solution, the variables are partitioned into S_fixed and S_free sets:

X_ij ∈ S_fixed if |∇_ij g(X)| ≤ λ_ij and X_ij = 0;  X_ij ∈ S_free otherwise.   (2)

Only the free set S_free is updated at each Newton iteration, reducing the number of variables to be updated to m = |S_free|, which is comparable to ‖X*‖_0, the sparsity of the solution.
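As a small illustration of this split, a dense-matrix sketch (our own; BIGQUIC itself never forms W = X⁻¹ densely) follows.

import numpy as np

def free_set_mask(X, W, S, Lam):
    """Boolean mask of the free set in Eq. (2).

    X: current iterate; W: X^{-1}; S: sample covariance;
    Lam: matrix of penalties lambda_ij. All dense numpy arrays here.
    """
    grad = S - W                              # since grad g(X) = S - X^{-1}
    fixed = (np.abs(grad) <= Lam) & (X == 0)  # variables that stay at zero
    return ~fixed                             # True on the free set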
Difficulty in Approximating the Newton Direction. Let us first consider the generalized Newton direction for (1):

D_t = arg min_D { ḡ_{X_t}(D) + h(X_t + D) },   (3)

where  ḡ_{X_t}(D) = g(X_t) + tr(∇g(X_t)⊤D) + (1/2) vec(D)⊤ ∇²g(X_t) vec(D).   (4)

In our problem ∇g(X_t) = S − X_t⁻¹ and ∇²g(X_t) = X_t⁻¹ ⊗ X_t⁻¹, where ⊗ denotes the Kronecker product of two matrices. When X_t is sparse, the Newton direction computation (3) can be solved
efficiently by coordinate descent [9]. The obvious implementation calls for the computation and storage of W_t = X_t⁻¹; using this to compute a = W_ij² + W_ii W_jj, b = S_ij − W_ij + w_i⊤Dw_j, and c = X_ij + D_ij. Armed with these quantities, the coordinate descent update for variable D_ij takes the form:

D_ij ← D_ij − c + S(c − b/a, λ_ij/a),   (5)

where S(z, r) = sign(z) max{|z| − r, 0} is the soft-thresholding function.
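A minimal sketch of this single-coordinate update, with all quantities kept dense purely for readability, is given below; the symmetric handling of (i, j) and (j, i) is our own simplification rather than a transcription of the QUIC code.

import numpy as np

def soft_threshold(z, r):
    return np.sign(z) * max(abs(z) - r, 0.0)

def update_coordinate(D, X, S, W, U, i, j, lam):
    """One coordinate-descent step of Eq. (5), maintaining U = D @ W."""
    a = W[i, j]**2 + W[i, i] * W[j, j]
    b = S[i, j] - W[i, j] + W[:, i] @ U[:, j]      # w_i^T D w_j via U = D W
    c = X[i, j] + D[i, j]
    mu = -c + soft_threshold(c - b / a, lam / a)   # net change applied to D_ij
    D[i, j] += mu
    U[i, :] += mu * W[j, :]                        # keep U = D W consistent
    if i != j:
        D[j, i] += mu
        U[j, :] += mu * W[i, :]
    return mu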
The key computational bottleneck here is in computing the terms w_i⊤Dw_j, which take O(p²) time when implemented naively. To address this, [9] proposed to store and maintain U = DW, which reduces the cost to O(p) flops per update. However, this is not a strategy we can use when dealing with very large data sets: storing the p × p dense matrices U and W in memory would be prohibitive. The straightforward alternative is to compute (and recompute when necessary) the elements of W on demand, resulting in O(p²) time complexity.
Our key innovation to address this is a novel block coordinate descent scheme, detailed in Section 3.1, that also uses clustering to strike a balance between memory use and computational cost while exploiting sparsity. The result is a procedure with wall-time comparable to that of QUIC on mid-sized problems, which can scale up to very large problem instances that the original QUIC could not handle.
Difficulty in the Line Search Procedure. After finding the generalized Newton direction D_t, QUIC then descends using this direction after a line-search via Armijo's rule. Specifically, it selects the largest step size α ∈ {β⁰, β¹, ...} such that X + αD_t is (a) positive definite, and (b) satisfies the following sufficient decrease condition:

f(X + αD_t) ≤ f(X) + ασΔ,  Δ = tr(∇g(X)⊤D_t) + ‖X + D_t‖_1 − ‖X‖_1.   (6)

The key computational bottleneck is checking positive definiteness (typically by computing the smallest eigenvalue), and the computation of the determinant of a sparse matrix with dimension that can reach a million. As we show in Appendix 6.4, the time and space complexity of classical sparse Cholesky decomposition generally grows quadratically with dimensionality even when fixing the number of nonzero elements in the matrix, so it is nontrivial to address this problem. Our key innovation, detailed in Section 3.2, is an efficient procedure that checks both conditions (a) and (b) above using Schur complements and sparse linear equation solving. The computation only uses memory proportional to the number of nonzeros in the iterate.
Many other difficulties arise when dealing with large sparse matrices in the sparse inverse covariance problem. We present some of them in Appendix 6.5.
3 Our proposed algorithm
In this section, we describe our proposed algorithm, BIGQUIC, with the key innovations mentioned in the previous section. We assume that the iterates X_t have m nonzero elements, and that each iterate is stored in memory using a sparse format. We denote the size of the free set by s and observe that it is usually very small and just a constant factor larger than m*, the number of nonzeros in the final solution [9]. Also, the sample covariance matrix is stored in its factored form S = Y Y⊤, where Y is the normalized sample matrix. We now discuss a crucial element of BIGQUIC, our novel block coordinate descent scheme for solving each subproblem (3).
3.1 Block Coordinate Descent method
The most expensive step during the coordinate descent update for D_ij is the computation of w_i⊤Dw_j, where w_i is the i-th column of W = X⁻¹; see (5). It is not possible to compute W = X⁻¹ with Cholesky factorization as was done in [9], nor can it be stored in memory. Note that w_i is the solution of the linear system Xw_i = e_i. We thus use the conjugate gradient method (CG) to compute w_i, leveraging the fact that X is a positive definite matrix. This solver requires only matrix-vector products, which can be efficiently implemented for the sparse matrix X. CG has time complexity O(mT), where T is the number of iterations required to achieve the desired accuracy.
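For illustration, columns of W can be recovered on demand with an off-the-shelf CG solver; here scipy's cg stands in for the paper's own implementation, and the stopping tolerance is left at the library default, whereas BIGQUIC tightens the CG residual adaptively (cf. Theorem 3 below).

import numpy as np
from scipy.sparse.linalg import cg

def columns_of_inverse(X, cols):
    """Recover the requested columns of W = X^{-1} for a sparse SPD matrix X."""
    p = X.shape[0]
    W_cols = np.zeros((p, len(cols)))
    for k, i in enumerate(cols):
        e_i = np.zeros(p)
        e_i[i] = 1.0
        w_i, info = cg(X, e_i)      # solve X w_i = e_i; info == 0 signals success
        W_cols[:, k] = w_i
    return W_cols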
Vanilla Coordinate Descent. A single step of coordinate descent requires the solution of two linear systems Xw_i = e_i and Xw_j = e_j, which yield the vectors w_i and w_j; we can then compute w_i⊤Dw_j. The time complexity of each update is thus O(mT + s) operations, and the overall complexity is O(msT + s²) for one full sweep through the entire matrix. Even when the matrix is sparse, the quadratic dependence on the number of nonzero elements is expensive.
Our Approach: Block Coordinate Descent with a memory cache scheme. In the following we present a block coordinate descent scheme that can accelerate the update procedure by storing and reusing more results of the intermediate computations. The resulting increased memory use and speedup are controlled by the number of blocks employed, which we denote by k.
Assume that only some columns of W are stored in memory. In order to update D_ij, we need both w_i and w_j; if either one is not directly available, we have to recompute it by CG, and we call this a "cache miss". A good update sequence can minimize the cache miss rate. While it is hard to find the optimal sequence in general, we successfully applied a block-by-block update sequence with a careful clustering scheme, where the number of cache misses is sufficiently small.
Assume we pick k such that we can store p/k columns of W (p²/k elements) in memory. Suppose we are given a partition of N = {1, ..., p} into k blocks, S_1, ..., S_k. We divide the matrix D into k × k blocks accordingly. Within each block we run T_inner sweeps over the variables within that block, and in the outer iteration we sweep through all the blocks T_outer times. We use the notation W_{S_q} to denote the p × |S_q| matrix containing the columns of W that correspond to the subset S_q.
Coordinate descent within a block. To update the variables in the block (S_z, S_q) of D, we first compute W_{S_z} and W_{S_q} by CG and store them in memory, meaning that there is no cache miss during the within-block coordinate updates. With U_{S_q} = D W_{S_q} maintained, the update for D_ij can be computed via w_i⊤u_j when i ∈ S_z and j ∈ S_q. After updating each D_ij to D_ij + μ, we can maintain U_{S_q} by

U_it ← U_it + μ W_jt,  U_jt ← U_jt + μ W_it,  ∀t ∈ S_q.

The above coordinate update computations cost only O(p/k) operations because we only update a subset of the columns. Observe that U_rt never changes when r ∉ S_z ∪ S_q. Therefore, we can use the following arrangement to further reduce the time complexity. Before running coordinate descent for the block we compute and store P_ij = (w_i)_{S_z̄q̄}⊤ (u_j)_{S_z̄q̄} for all (i, j) in the free set of the current block, where S_z̄q̄ = {i | i ∉ S_z and i ∉ S_q}. The term w_i⊤u_j for updating D_ij can then be computed as w_i⊤u_j = P_ij + (w_i)_{S_z}⊤(u_j)_{S_z} + (w_i)_{S_q}⊤(u_j)_{S_q}. With this trick, each coordinate descent step within the block takes only O(p/k) time, and we only need to store U_{S_z ∪ S_q}, which requires only O(p²/k²) memory. Computing P_ij takes O(p) time for each (i, j), so if we update each coordinate T_inner times within a block, the time complexity is O(p + T_inner p/k) and the amortized cost per coordinate update is only O(p/T_inner + p/k). This time complexity suggests that we should run more iterations within each block.
Sweeping through all the blocks. To go through all the blocks, each time we select a z ∈ {1, ..., k} and update blocks (S_z, S_1), ..., (S_z, S_k). Since all of them share {w_i | i ∈ S_z}, we first compute these and store them in memory. When updating an off-diagonal block (S_z, S_q), if the free sets are dense, we need to compute and store {w_i | i ∈ S_q}. So in total each block of W will be computed k times. The total time complexity becomes O(kpmT), where m is the number of nonzeros in X and T is the number of conjugate gradient iterations. Assuming the number of nonzeros in X is close to the size of the free set (m ≈ s), each coordinate update costs O(kpT) flops.
Selecting the blocks using clustering. We now show that a careful selection of the blocks using a clustering scheme can lead to dramatic speedup for block coordinate descent. When updating variables in the block (S_z, S_q), we need the column w_j only if some variable in {D_ij | i ∈ S_z} lies in the free set. Leveraging this key observation, given two partitions S_z and S_q, we define the set of boundary nodes as B(S_z, S_q) ≡ {j | j ∈ S_q and ∃i ∈ S_z s.t. F_ij = 1}, where the matrix F is an indicator of the free set.
The number of columns to be computed in one sweep is then given by p + Σ_{z≠q} |B(S_z, S_q)|. Therefore, we would like to find a partition {S_1, ..., S_k} for which Σ_{z≠q} |B(S_z, S_q)| is minimal. It appears to be hard to find the partitioning that minimizes the number of boundary nodes. However, we note that the number in question is bounded by the number of cross-cluster edges: |B(S_z, S_q)| < Σ_{i∈S_z, j∈S_q} F_ij. This suggests the use of graph clustering algorithms, such as METIS [10] or Graclus [5], which minimize the right hand side. Assuming that the ratio of between-cluster edges to the total number of edges is r, we observe a reduced time complexity of O((p + rm)T) when computing elements of W, and r is very small in real datasets. In real datasets, when we converge to very sparse solutions, more than 95% of edges are in the diagonal blocks. In the case of the fMRI dataset with p = 228,483, we used 20 blocks, and the total number of boundary nodes was only |B| = 8,697. Compared to block coordinate descent with a random partition, which generally needs to compute 228,483 × 20 columns, the clustering resulted in the computation of 228,483 + 8,697 columns, thus achieving an almost 20-fold speedup. In Appendix 6.6 we also discuss additional benefits of the graph clustering algorithm that result in accelerated convergence.
3.2 Line Search
The line search step requires an efficient and scalable procedure that computes log det(A) and checks the positive definiteness of a sparse matrix A. We present a procedure that has complexity of at most O(mpT), where T is the number of iterations used by the sparse linear solver. We note that computing log det(A) for a large sparse matrix A, for which we only have a matrix-vector multiplication subroutine available, is an interesting subproblem in its own right, and we expect that numerous other applications may benefit from the approach presented below. The following lemma can be proved by induction on p:

Lemma 1. If A = ( a b⊤ ; b C ) is a partitioning of an arbitrary p × p matrix, where a is a scalar and b is a (p−1)-dimensional vector, then det(A) = det(C)(a − b⊤C⁻¹b). Moreover, A is positive definite if and only if C is positive definite and (a − b⊤C⁻¹b) > 0.

The above lemma allows us to compute the determinant by reducing it to solving linear systems, and also allows us to check positive-definiteness. Applying Lemma 1 recursively, we get

log det A = Σ_{i=1}^p log( A_ii − A_{(i+1):p,i}⊤ A_{(i+1):p,(i+1):p}⁻¹ A_{(i+1):p,i} ),   (7)

where each A_{i1:i2, j1:j2} denotes the submatrix of A with row indexes i1, ..., i2 and column indexes j1, ..., j2. Each A_{(i+1):p,(i+1):p}⁻¹ A_{(i+1):p,i} in the above formula can be computed as the solution of a linear system, and hence we can avoid the storage of the (dense) inverse matrix. By Lemma 1, we can check positive definiteness by verifying that all the terms in (7) are positive. Notice that we have to compute (7) in reverse order (i = p, ..., 1) to avoid the case where A_{(i+1):p,(i+1):p} is not positive definite.
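A direct transcription of this procedure, using scipy's sparse direct solver in place of the CG solver that BIGQUIC employs (so this sketch does not have the bounded-memory property), is:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def logdet_and_pd(A):
    """log det(A) via Eq. (7), with a positive-definiteness check, in reverse order."""
    A = sp.csc_matrix(A)
    p = A.shape[0]
    logdet = 0.0
    for i in range(p - 1, -1, -1):               # i = p, ..., 1 as in the text
        a = A[i, i]
        if i < p - 1:
            b = A[i + 1:, i].toarray().ravel()
            schur = a - b @ spsolve(A[i + 1:, i + 1:], b)   # scalar Schur complement
        else:
            schur = a
        if schur <= 0:
            return None, False                    # A is not positive definite
        logdet += np.log(schur)
    return logdet, True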
3.3 Summary of the algorithm
In this section we present BIGQUIC as Algorithm 1. The detailed time complexity analysis is presented in Appendix 6.7. In summary, the time needed to compute the columns of W in block coordinate descent, O((p + |B|)mT T_outer), dominates the time complexity, which underscores the importance of minimizing the number of boundary nodes |B| via our clustering scheme.
Algorithm 1: BIGQUIC algorithm
Input:  Samples Y, regularization parameter λ, initial iterate X_0
Output: Sequence {X_t} that converges to X*.
1:  for t = 0, 1, ... do
2:      Compute W_t = X_t⁻¹ column by column; partition the variables into free and fixed sets.
3:      Run the graph clustering algorithm based on absolute values on the free set.
4:      for sweep = 1, ..., T_outer do
5:          for s = 1, ..., k do
6:              Compute W_{S_s} by CG.
7:              for q = 1, ..., k do
8:                  Identify boundary nodes B_sq := B(S_s, S_q) ⊆ S_q (only needed if s ≠ q).
9:                  Compute W_{B_sq} for the boundary nodes (only needed if s ≠ q).
10:                 Compute U_{B_sq}, and P_ij for all (i, j) in the current block.
11:                 Conduct coordinate updates.
12:     Find the step size α by the method proposed in Section 3.2.
Parallelization. While our method runs well on a single machine with a single core, here we point out components of our algorithm that can be "embarrassingly" parallelized on any single machine with multiple cores (with shared memory). We first note that we can obtain a good starting point for our algorithm by applying the divide-and-conquer framework proposed in [8]: this divides the problem into k subproblems, which can then be solved independently in parallel. Consider the steps of our Algorithm 1 in BIGQUIC. In step 2, instead of computing columns of W one by one, we can compute t rows of W at a time, and parallelize these t jobs. A similar trick can be used in steps 6 and 9. In step 3, we use the multi-core version of METIS (ParMETIS) for graph clustering. In steps 8 and 10, the computations are naturally independent. In step 15, we compute each term in (7) independently and abort if any of the processes reports non-positive definiteness. The only sequential part is the coordinate update in step 11, but note (see Section 3.1) that we have reduced the complexity of this step from O(p) in QUIC to O(p/k).
4 Convergence Analysis
In this section, we present two main theoretical results. First, we show that our algorithm converges to the global optimum even with inexact Hessian computation. Second, we show that by a careful control of the error in the Hessian computation, BIGQUIC can still achieve a quadratic rate of convergence in terms of Newton iterations. Our analysis differs from that in QUIC [9], where the computations are all assumed to be accurate. [11] also provides a convergence analysis for general proximal Newton methods, but our algorithm, with modifications such as fixed/free set selection, does not exactly fall into their framework; moreover, our analysis shows a quadratic convergence rate, while they only show a super-linear convergence rate.
In the BIGQUIC algorithm, we compute w_i in two places. The first place is the gradient computation in the second term of (4), where ∇g(X) = S − W. The second place is in the third term of (4), where ∇²g(X) = W ⊗ W. At first glance they are equivalent and can be computed simultaneously, but it turns out that by carefully analysing the difference between the two types of w_i, we can achieve much faster convergence, as discussed below.
The key observation is that we only require the gradient W_ij for all (i, j) ∈ S_free to conduct coordinate descent updates. Since the free set is very sparse and can fit in memory, those W_ij only need to be computed once and stored in memory. On the other hand, the computation of w_i⊤Dw_j corresponds to the Hessian computation, and we need two columns for each coordinate update, which have to be computed repeatedly.
It is easy to produce an example where the algorithm converges to a wrong point when the gradient computation is not accurate, as shown in Figure 5(b) (in Appendix 6.5). Luckily, based on the above analysis the gradient only needs to be computed once per Newton iteration, so we can compute it with high precision. On the other hand, w_i for the Hessian has to be computed repeatedly, so we do not want to spend too much time computing each of them accurately. We define H̃_t = W̃_t ⊗ W̃_t to be the approximated Hessian matrix, and derive the following theorem to show that even if the Hessian is inaccurate, BIGQUIC still converges to the global optimum. Notice that our proof covers the BIGQUIC algorithm with fixed/free set selection, and the only assumption is that subproblem (3) is solved exactly at each Newton iteration; it is future work to consider the case where the subproblems are solved approximately.

Theorem 1. For solving (1), if ∇g(X) is computed exactly and ηI ⪯ H̃_t ⪯ η̄I for some constants η, η̄ > 0 at every Newton iteration, then BIGQUIC converges to the global optimum.
The proof is in Appendix 6.1. Theorem 1 suggests that we do not need very accurate Hessian computation for convergence. To obtain a super-linear convergence rate, we require the Hessian computation to be more and more accurate as X_t approaches X*. We first introduce the following notion of minimum-norm subgradient to measure the optimality of X:

grad^S_ij f(X) = ∇_ij g(X) + sign(X_ij) λ_ij,                       if X_ij ≠ 0,
grad^S_ij f(X) = sign(∇_ij g(X)) max(|∇_ij g(X)| − λ_ij, 0),        if X_ij = 0.

The following theorem then shows that if we compute the Hessian more and more accurately, BIGQUIC will have a super-linear or even quadratic convergence rate.

Theorem 2. When applying BIGQUIC to solve (1), assume ∇g(X_t) is exactly computed and ∇²g(X_t) is approximated by H̃_t, and the following condition holds:

∄(i, j) such that X*_ij = 0 and |∇_ij g(X*)| = λ.   (8)

Then ‖X_{t+1} − X*‖ = O(‖X_t − X*‖^{1+p}) as t → ∞ for 0 < p ≤ 1 if and only if

‖H̃_t − ∇²g(X_t)‖ = O(‖grad^S(X_t)‖^p) as t → ∞.   (9)
Figure 1: The comparison of scalability on three types of graph structures: (a) chain graph, (b) random graph, (c) fMRI data. In all the experiments, BIGQUIC can solve larger problems than QUIC even with a single core, and using 32 cores BIGQUIC can solve million-dimensional data in one day.

The proof is in Appendix 6.2. The assumption in (8) can be shown to be satisfied with very high probability (and was also satisfied in our experiments). Theorem 2 suggests that we can achieve super-linear, or even quadratic, convergence rates by a careful control of the approximated Hessian H̃_t. In the BIGQUIC algorithm, we can further control ‖H̃_t − ∇²g(X_t)‖ by the residuals of the conjugate gradient solver to achieve the desired convergence rate. Suppose the residual is b_i = Xw̃_i − e_i for each i = 1, ..., p, and B_t = [b_1 b_2 ... b_p] is the collection of the residuals at the t-th iteration. The following theorem shows that we can control the convergence rate by controlling the norm of B_t.

Theorem 3. In the BIGQUIC algorithm, if the residual matrix satisfies ‖B_t‖ = O(‖grad^S(X_t)‖^p) for some 0 < p ≤ 1 as t → ∞, then ‖X_{t+1} − X*‖ = O(‖X_t − X*‖^{1+p}) as t → ∞.

The proof is in Appendix 6.3. Since grad^S(X_t) can be easily computed without additional cost, and the residuals B_t can be naturally controlled when running conjugate gradient, we can easily control the asymptotic convergence rate in practice.
5 Experimental Results
In this section, we show that our proposed method BIGQUIC can scale to high-dimensional datasets, both synthetic and real. All the experiments are run on a single computing node with 4 Intel Xeon E5-4650 2.7GHz CPUs, each with 8 cores and 32G memory.
Scalability of BIGQUIC on high-dimensional datasets. In the first set of experiments, we show that BIGQUIC can scale to extremely high dimensional datasets. We conduct experiments on the following synthetic and real datasets:
(1) Chain graphs: the ground truth precision matrix is set to Σ⁻¹_{i,i−1} = −0.5 and Σ⁻¹_{i,i} = 1.25.
(2) Graphs with random pattern: we use the procedure mentioned in Example 1 in [12] to generate the random pattern. When generating the graph, we assume there are 500 clusters, and 90% of the edges are within clusters. We fix the average degree to be 10.
(3) FMRI data: the original dataset has dimensionality p = 228,483 and n = 518. For the scalability experiments, we subsample various numbers of random variables from the whole dataset.
We use λ = 0.5 for the chain and random graphs so that the number of recovered edges is close to the ground truth, and set the number of samples to n = 100. We use λ = 0.6 for the fMRI dataset, which recovers a graph with average degree 20. We set the stopping condition to be ‖grad^S(X_t)‖ < 0.01‖X_t‖_1. In all of our experiments, the number of nonzeros during the optimization phase does not exceed 5‖X*‖_0 in intermediate steps, therefore we can always store the sparse representation of X_t in memory. For BIGQUIC, we set the number of blocks k to be the smallest number such that p/k columns of W can fit into 32G memory. For both QUIC and BIGQUIC, we apply the divide and conquer method proposed in [8] with 10 clusters to get a better initial point. The results are shown in Figure 1. We can see that BIGQUIC can solve one-million-dimensional chain graphs and random graphs in one day, and handle the full fMRI dataset in about 5 hours.
More interestingly, even for datasets with size less than 30,000, where $p^2$-size matrices can fit in
memory, BIG QUIC is faster than QUIC by exploiting the sparsity. Figure 2 shows an example on
a sampled fMRI dataset with p = 20,000, and we can see that BIG QUIC outperforms QUIC even when
using a single core. Also, BIG QUIC is much faster than other solvers, including Glasso [7] and
ALM [17]. Figure 3 shows the speedup on a multicore shared-memory machine. BIG QUIC can
achieve about a 14-fold speedup using 16 cores, and a 20-fold speedup when using 32 cores.
FMRI dataset. An extensive resting-state fMRI dataset from a single individual was analyzed in order to test BIG QUIC on real-world data. The data (collected as part of the MyConnectome project:
http://www.myconnectome.org) comprised 36 resting fMRI sessions collected across different days using whole-brain multiband EPI acquisition, each lasting 10 minutes (TR = 1.16 s,
multiband factor = 4, TE = 30 ms, voxel size = 2.4 mm isotropic, 68 slices, 518 time points). The
Figure 2: Comparison on fMRI data with p = 20,000 (the maximum dimension that state-of-the-art
software can handle). Figure 3: The speedup of BIG QUIC when using multiple cores.
data were preprocessed using FSL 5.0.2, including motion correction, scrubbing of motion frames,
registration of EPI images to a common high-resolution structural image using boundary-based registration, and affine transformation to MNI space. The full brain mask included 228,483 voxels.
After motion scrubbing, the dataset included a total of 18,435 time points across all sessions.
BIG QUIC was applied to the full dataset: for the first time, we can learn a GMRF over the entire
set of voxels, instead of over a smaller set of curated regions or supervoxels. Exploratory analyses
over a range of $\lambda$ values suggested that $\lambda = 0.5$ offered a reasonable level of sparsity. The resulting graph was analyzed to determine whether it identified neuroscientifically plausible networks.
Degree was computed for each vertex; high-degree regions were primarily found in gray matter, suggesting that the method successfully identified plausible functional connections (see left
panel of Figure 4). The structure of the graph was further examined in order to determine whether
the method identified plausible network modules. Modularity-based clustering [1] was applied to
the graph, resulting in 60 modules that exceeded the threshold size of 100 vertices. A number of
neurobiologically plausible resting-state networks were identified, including "default mode" and
sensorimotor networks (right panel of Figure 4). In addition, the method identified a number of
structured coherent noise sources (i.e., MRI artifacts) in the dataset. For both the neurally plausible and
the artifactual modules, the modules detected by BIG QUIC are similar to those identified using independent component analysis on the same dataset, without the need for the extensive dimensionality
reduction (without statistical guarantees) inherent in such techniques.
Figure 4: (Best viewed in color) Results from BIG QUIC analyses of resting-state fMRI data. Left panel: map
of the degree distribution across voxels, thresholded at degree = 20. Regions showing high degree were generally
found in the gray matter (as expected for truly connected functional regions), with very few high-degree voxels
found in the white matter. Right panel: left-hemisphere surface renderings of two network modules obtained
through graph clustering. The top panel shows a sensorimotor network; the bottom panel shows medial prefrontal,
posterior cingulate, and lateral temporoparietal regions characteristic of the "default mode" generally observed
during the resting state. Both of these are commonly observed in analyses of resting-state fMRI data.
Acknowledgments
This research was supported by NSF grant CCF-1320746 and NSF grant CCF-1117055. C.-J.H.
also acknowledges the support of an IBM PhD fellowship. P.R. acknowledges the support of ARO via
W911NF-12-1-0390 and NSF via IIS-1149803, DMS-1264033. R.P. acknowledges the support of
ONR via N000140710116 and the James S. McDonnell Foundation.
References
[1] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of community hierarchies in large networks. J. Stat. Mech., 2008.
[2] T. Cai, W. Liu, and X. Luo. A constrained $\ell_1$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106:594–607, 2011.
[3] A. d'Aspremont, O. Banerjee, and L. E. Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and its Applications, 30(1):56–66, 2008.
[4] R. S. Dembo, S. C. Eisenstat, and T. Steihaug. Inexact Newton methods. SIAM J. Numerical Anal., 19(2):400–408, 1982.
[5] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: A multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(11):1944–1957, 2007.
[6] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse Gaussians. UAI, 2008.
[7] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, July 2008.
[8] C.-J. Hsieh, I. S. Dhillon, P. Ravikumar, and A. Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS, 2012.
[9] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance estimation using quadratic approximation. 2013.
[10] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput., 20(1):359–392, 1999.
[11] J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for minimizing composite functions. In NIPS, 2012.
[12] L. Li and K.-C. Toh. An inexact interior point method for $\ell_1$-regularized sparse covariance selection. Mathematical Programming Computation, 2:291–315, 2010.
[13] R. Mazumder and T. Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. Journal of Machine Learning Research, 13:723–736, 2012.
[14] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34:1436–1462, 2006.
[15] P. Olsen, F. Oztoprak, J. Nocedal, and S. Rennie. Newton-like methods for sparse inverse covariance estimation. Technical report, Optimization Center, Northwestern University, 2012.
[16] B. Rolfs, B. Rajaratnam, D. Guillot, A. Maleki, and I. Wong. Iterative thresholding algorithm for sparse inverse covariance estimation. In NIPS, 2012.
[17] K. Scheinberg, S. Ma, and D. Goldfarb. Sparse inverse covariance selection via alternating linearization methods. NIPS, 2010.
[18] K. Scheinberg and I. Rish. Learning sparse Gaussian Markov networks using a greedy coordinate ascent approach. In J. Balcázar, F. Bonchi, A. Gionis, and M. Sebag, editors, Machine Learning and Knowledge Discovery in Databases, volume 6323 of Lecture Notes in Computer Science, pages 196–212. Springer Berlin / Heidelberg, 2010.
[19] D. M. Witten, J. H. Friedman, and N. Simon. New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20(4):892–900, 2011.
4,336 | 4,924 | Speeding up Permutation Testing in Neuroimaging∗
Chris Hinrichs‡ Vamsi K. Ithapu‡ Qinyuan Sun‡ Sterling C. Johnson†‡ Vikas Singh‡
† William S. Middleton Memorial VA Hospital
‡ University of Wisconsin–Madison
{hinrichs,vamsi}@cs.wisc.edu {qsun28}@wisc.edu
{scj}@medicine.wisc.edu {vsingh}@biostat.wisc.edu
http://pages.cs.wisc.edu/~vamsi/pt_fast
Abstract
Multiple hypothesis testing is a significant problem in nearly all neuroimaging
studies. In order to correct for this phenomenon, we require a reliable estimate
of the Family-Wise Error Rate (FWER). The well-known Bonferroni correction
method, while simple to implement, is quite conservative, and can substantially
under-power a study because it ignores dependencies between test statistics. Permutation testing, on the other hand, is an exact, non-parametric method of estimating the FWER for a given α-threshold, but for acceptably low thresholds
the computational burden can be prohibitive. In this paper, we show that permutation testing in fact amounts to populating the columns of a very large matrix
P. By analyzing the spectrum of this matrix, under certain conditions, we see
that P has a low-rank plus a low-variance residual decomposition which makes
it suitable for highly sub-sampled (on the order of 0.5%) matrix completion methods. Based on this observation, we propose a novel permutation testing
methodology which offers a large speedup, without sacrificing the fidelity of the
estimated FWER. Our evaluations on four different neuroimaging datasets show
that a computational speedup factor of roughly 50× can be achieved while recovering the FWER distribution up to very high accuracy. Further, we show that the
estimated α-threshold is also recovered faithfully, and is stable.
1 Introduction
Suppose we have completed a placebo-controlled clinical trial of a promising new drug for a neurodegenerative disorder such as Alzheimer's disease (AD) on a small-sized cohort. The study is
designed such that in addition to assessing improvements in standard cognitive outcomes (e.g.,
MMSE), the purported treatment effects will also be assessed using Neuroimaging data. The rationale here is that, even if the drug does induce variations in cognitive symptoms, the brain changes
are observable much earlier in the imaging data. On the imaging front, this analysis checks for
statistically significant differences between brain images of subjects assigned to the two trial arms:
treatment and placebo. Alternatively, consider a second scenario where we have completed a neuroimaging research study of a particular controlled factor, such as genotype, and the interest is to
evaluate group-wise differences in the brain images: to identify which regions are affected as a
function of class membership. In either cases, the standard image processing workflow yields for
each subject a 3-D image (or voxel-wise "map"). Depending on the image modality acquired, these
maps are of cerebral gray matter density, longitudinal deformation (local growth or contraction), or
metabolism. It is assumed that these maps have been "co-registered" across different subjects so that
each voxel corresponds to approximately the same anatomical location [1, 2].
∗ Hinrichs and Ithapu are joint first authors and contributed equally to this work.
In order to localize the effect under investigation (i.e., treatment or genotype), we then have to
calculate a very large number (say, v) of univariate voxel-wise statistics, typically up to several
million voxels. For example, consider group-contrast t-statistics (here we will mainly consider t-statistics; however, other test statistics are also applicable, such as the F statistic used in ANOVA
testing, Pearson's correlation as used in functional imaging studies, or the $\chi^2$ test of dependence
between variates, so long as certain conditions described in Section 2.3 are satisfied). In some voxels,
it may turn out that a group-level effect has been indicated, but it is not clear right away what its true
significance level should be, if any. As one might expect, given the number of hypothesis tests v,
multiple testing issues in this setting are quite severe, making it difficult to assess the true Family-Wise Type I Error Rate (FWER) [3]. If we were to address this issue via Bonferroni correction
[4], the enormous number of separate tests implies that certain weaker signals will almost certainly
never be detected, even if they are real. This directly affects studies of neurodegenerative disorders
in which atrophy proceeds at a very slow rate and the therapeutic effects of a drug are likely to be mild
to moderate anyway. This is a critical bottleneck which makes localizing real, albeit slight, short-term treatment effects problematic. Already, this restriction will prevent us from using a smaller-sized study (fewer subjects), increasing the cost of pharmaceutical research. In the worst case, an
otherwise real treatment effect of a drug may not survive correction, and the trial may be deemed a
failure.
Bonferroni versus true FWER threshold. Observe that theoretically, there is a case in which the
Bonferroni-corrected threshold is close to the true FWER threshold: when point-wise statistics are
i.i.d. If so, then the extremely low Bonferroni-corrected α-threshold crossings effectively become
mutually exclusive, which makes the Union Bound (on which Bonferroni correction is based) nearly
tight. However, when variables are highly dependent (and indeed, even without smoothing, there are
many sources of strong non-Gaussian dependencies between voxels), the true FWER threshold can
be much more relaxed, and it is precisely this phenomenon which drives the search for alternatives to
Bonferroni correction. Thus, many methods have been developed to more accurately and efficiently
estimate or approximate the FWER [5, 6, 7, 8], which is a subject of much interest in statistics [9],
machine learning [10], bioinformatics [11], and neuroimaging [12].
Permutation testing. A commonly used method of directly and non-parametrically estimating the
FWER is Permutation testing [12, 13], which is a method of sampling from the Global (i.e., FamilyWise) Null distribution. Permutation testing ensures that any relevant dependencies present in the
data carry through to the test statistics, giving an unbiased estimator of the FWER. If we want to
choose a threshold sufficient to exclude all spurious results with probability 1 ? ?, we can construct
a histogram of sample maxima taken from permutation samples, and choose a threshold giving the
1 ? ?/2 quantile. Unfortunately, reliable FWER estimates derived via permutation testing come
at an excessive (and often infeasible) computational cost: often tens of thousands or even millions of
permutation samples are required, each of which requires a complete pass over the entire data set.
This step alone can run from a few days up to many weeks and even longer [14, 15].
Observe that the very same dependencies between voxels that forced the usage of permutation
testing indicate that the overwhelming majority of the work in computing so many highly correlated
Null statistics is redundant. Note that regardless of their description, strong dependencies of almost
any kind will tend to concentrate most of their co-variation into a low-rank subspace, leaving a
high-rank, low-variance residual [5]. In fact, for Genome wide Association studies (GWAS), many
strategies calculate the "effective number" ($M_{\text{eff}}$) of independent tests corresponding to the rank of
this subspace [16, 5]. This paper is based on the observation that such a low-rank structure must also
appear in permutation test samples. Using ideas from online low-rank matrix completion [17] we can
sample a few of the Null statistics and reconstruct the remainder as long as we properly account for
the residual. This allows us to sub-sample at extremely low rates, generally < 1%. The contribution
of our work is to significantly speed up permutation testing in neuroimaging, delivering running-time
improvements of up to 50×. In other words, our algorithm does the same job as permutation testing,
but takes anywhere from a few minutes up to a few hours, rather than days or weeks. Further, based
on recent work in random matrix theory, we provide an analysis which sheds additional light on
the use of matrix completion methods in this context. To ensure that our conclusions are not an
artifact of a specific dataset, we present strong empirical evidence via evaluations on four separate
neuroimaging datasets of Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients
as well as cognitively healthy age-matched controls (CN), showing that the proposed method can
recover highly faithful Global Null distributions, while offering substantial speedups.
2 The Proposed Algorithm
We first cover some basic concepts underlying permutation testing and low-rank matrix completion
in more detail, before presenting our algorithm and the associated analysis.
2.1 Permutation testing
Randomly sampled permutation testing [18] is a methodology for drawing samples under the Global
(Family-Wise) Null hypothesis. Recall that although point-wise test statistics have well-characterized univariate Null distributions, the sample maximum usually has no analytic form due to the
strong correlations across voxels. Permutation is particularly desirable in this setting because it is
free of any distribution assumption whatsoever [12]. The basic idea of permutation testing is very
simple, yet extremely powerful. Suppose we have a set of labeled high dimensional data points,
and a univariate test statistic which measures some interaction between labeled groups for every
dimension (or feature). If we randomly permute the labels and recalculate each test statistic, then
by construction we get a sample from the Global Null distribution. The maximum over all of these
statistics for every permutation sample is then used to construct a histogram, which therefore is a
non-parametric estimate of the distribution of the sample maximum of Null statistics. For a test
statistic derived from the real labels, the FWER-corrected p-value is then equal to the fraction of
permutation samples which were more extreme. Note that all of the permutation samples can be
assembled into a matrix $P \in \mathbb{R}^{v \times T}$, where v is the number of comparisons (voxels for images), and
T is the number of permutation samples.
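To make the construction of P concrete, here is a minimal sketch of naive permutation testing with two-sample t-statistics (our own illustration; the data layout, the 0/1 label coding, and the function name are assumptions):

    import numpy as np
    from scipy import stats

    def permutation_max_null(X, labels, T=1000, seed=0):
        # X: v x N data matrix (voxels by subjects); labels: binary group labels.
        # Each permutation fills one column of P with all v test statistics.
        rng = np.random.default_rng(seed)
        v = X.shape[0]
        P = np.empty((v, T))
        for t in range(T):
            perm = rng.permutation(labels)
            g1, g2 = X[:, perm == 0], X[:, perm == 1]
            P[:, t] = stats.ttest_ind(g1, g2, axis=1).statistic
        max_null = P.max(axis=0)   # histogram of these estimates the max-null
        return P, max_null

    # FWER-corrected threshold and p-value from the max-null samples:
    #   threshold = np.quantile(max_null, 1 - alpha)
    #   p_corr    = np.mean(max_null >= observed_statistic)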
There is a drawback to this approach, however. Observe that it is in the nature of random sampling
methods that we get many samples from near the mode(s) of the distribution of interest, but fewer
from the tails. Hence, to characterize the threshold for a small portion of the tail of this distribution,
we must draw a very large number of samples just so that the estimate converges. Thus, if we want
an ? = 0.01 threshold from the Null sample maximum distribution, we require many thousands of
permutation samples ? each requires randomizing the labels and recalculating all test statistics, a
very computationally expensive procedure when v is large. To be certain, we would like to ensure
an especially low FWER by first setting α very low, and then getting a very precise estimate of the
corresponding threshold. The smallest possible p-value we can derive this way is 1/T , so for very
low p-values, T must be very large.
2.2 Low-rank matrix completion
Low-rank matrix completion [19] seeks to reconstruct missing entries from a matrix, given only a
small fraction of its entries. The problem is ill-posed unless we assume this matrix has a low-rank
column space. If so, then a much smaller number of observations, on the order of r log(v), where
r is the column space's rank and v is its ambient dimension [19], is sufficient to recover both an
orthogonal basis for the row space as well as the expansion coefficients for each column, giving the
recovery. By placing an $\ell_1$-norm penalty on the eigenvalues of the recovered matrix via the nuclear
norm [20, 21], we can ensure that the solution is as low rank as possible. Alternatively, we can
specify a rank r ahead of time, and estimate an orthogonal basis of that rank by following a gradient
along the Grassmannian manifold [22, 17]. Denoting the set of randomly subsampled entries as Ω,
the matrix completion problem is given as,
? ? k2F
min kP? ? P
?
P
? = UW; U is orthogonal
s.t. P
(1)
where $U \in \mathbb{R}^{v \times r}$ is the low-rank basis of P, Ω gives the measured entries, and W is the set of
expansion coefficients which reconstructs $\hat{P}$ in U. Two recent methods operate in an online setting,
i.e., where rows of P arrive one at a time, and both U and W are updated accordingly [22, 17].
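As a sketch of the per-column recovery step implied by (1) (our illustration; in GRASTA [17] and related online methods the basis U is updated as well, which we omit here), the coefficients for one column reduce to a small least-squares fit on the observed rows:

    import numpy as np

    def complete_column(U, p_obs, obs_idx):
        # U: v x r fixed orthogonal basis; p_obs: observed entries of one column
        # of P at row indices obs_idx. Fit the coefficients by least squares,
        # then fill in the whole column as U @ w.
        w, *_ = np.linalg.lstsq(U[obs_idx, :], p_obs, rcond=None)
        return U @ w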
2.3 Low rank plus a long tail
Real-world data often have a dominant low-rank component. While the data may not be exactly
characterized by a low-rank basis, the residual will not significantly alter the eigen-spectrum of the
sample covariance in such cases. Having strong correlations is nearly synonymous with having a
3
skewed eigen-spectrum, because the flatter the eigen-spectrum becomes, the sparser the resulting
covariance matrix tends to be (the "uncertainty principle" between low-rank and sparse matrices
[23]). This low-rank structure carries through for purely linear statistics (such as sample means).
However, non-linearities in the test statistic calculation, e.g., normalizing by pooled variances, will
contribute a long tail of eigenvalues, and so we require that this long tail either decays rapidly
or does not overlap with the dominant eigenvalues. For t-statistics, the pooled variances are
unlikely to change very much from one permutation sample to another (barring outliers); hence
we expect that the spectrum of P will resemble that of the data covariance, with the addition of a
long, exponentially decaying tail. More generally, if the non-linearity does not de-correlate the test
statistics too much, it will preserve the low-rank structure.
If this long tail is indeed dominated by the low-rank structure, then its contribution to P can be
modeled as a low-variance Gaussian i.i.d. residual. A Central Limit argument appeals to the number
of independent eigenfunctions that contribute to this residual, and the orthogonality of eigenfunctions implies that as more of them meaningfully contribute to each entry in the residual, the more
independent those entries become. In other words, if this long tail begins at a low magnitude and
decays slowly, then we can treat it as a Gaussian i.i.d. residual; and if it decays rapidly, then the
residual will perhaps be less Gaussian, but also more negligible. Thus, our development in the next
section makes no direct assumption about these eigenvalues themselves, but rather that the residual
corresponds to a low-variance i.i.d. Gaussian random matrix; its contribution to the covariance of
test statistics will be Wishart distributed, and from that we can characterize its eigenvalues.
2.4 Our Method
It still remains to model the residual numerically. By sub-sampling we can reconstruct the low-rank
portion of P via matrix completion, but in order to obtain the desired sample maximum distribution we must also recover the residual. Exact recovery of the residual is essentially impossible;
fortunately, for our purposes we only need its effect on the distribution of the maximum per
permutation test. So, we estimate its variance (its mean is zero by assumption), and then randomly
sample from that distribution to recover the unobserved remainder of the matrix.
A large component of the running time of online subspace tracking algorithms is spent in updating the basis set U; yet, once a good estimate for U has been found, this becomes superfluous.
We therefore divide the entire process into two steps: training and recovery. During the training
phase we conduct a small number of fully sampled permutation tests (100 permutations in our experiments). From these permutation tests, we estimate U using sub-sampled matrix completion
methods [22, 17], making multiple passes over the training set (with a fixed sub-sampling rate) until
convergence. In our evaluations, three passes sufficed. Then, we obtain a distribution of the residual
S over the entire training set. Next is the recovery phase, in which we sub-sample a small fraction
of the entries of each successive column t, solve for the reconstruction coefficients W(·, t) in the
basis U by least squares, and then add random residuals using the parameters estimated during training.
After that, we proceed exactly as in normal permutation testing to recover the statistics.
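Putting the pieces together, a compact sketch of one recovery-phase trial (our own illustration; stat_fn, the sampling rate, and the residual parameter names are hypothetical):

    import numpy as np

    def recover_trial_max(U, stat_fn, rate, sigma, rng):
        # One permutation trial: observe a random ~rate fraction of the v
        # statistics, fit the coefficients in the trained basis U, add an
        # i.i.d. N(0, sigma^2) residual estimated during training, and
        # return this trial's maximum statistic.
        v = U.shape[0]
        obs = rng.random(v) < rate                     # e.g., rate = 0.005
        p_obs = stat_fn(obs)                           # compute only observed stats
        w, *_ = np.linalg.lstsq(U[obs], p_obs, rcond=None)
        p_hat = U @ w + rng.normal(0.0, sigma, size=v)
        p_hat[obs] = p_obs                             # keep exactly computed entries
        return p_hat.max()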
Bias-Variance tradeoff. By using a very sparse subsampling method, there is a bias-variance
dilemma in estimating S. That is, if we use the entire matrix P to estimate U, W and S, we
will obtain reliable estimates of S. But there is an overfitting problem: the least-squares objective
used in fitting W(·, t) to such a small sample of entries is likely to grossly underestimate the variance of S compared to when we use the entire matrix (the sub-sampling problem is not nearly as
over-constrained as that for the whole matrix). This sampling artifact reduces the apparent variance of S,
and induces a bias in the distribution of the sample maximum, because extreme values are found less
frequently; it has the effect of "shifting" the distribution of the sample maximum
towards 0. We correct for this bias by estimating the amount of the shift during the training phase,
and then shifting the recovered sample max distribution by this estimated amount.
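A minimal sketch of this correction (our own formulation): during training, both the fully computed maxima and their low-rate reconstructions are available, so the shift can be estimated as their mean difference and added back to the recovered max-null samples.

    import numpy as np

    def debias_max_null(m_true, m_recov, recovered_maxima):
        # m_true / m_recov: maxima of the fully sampled training permutations
        # and their low-rate reconstructions; the mean gap estimates the shift b.
        b_hat = np.mean(np.asarray(m_true) - np.asarray(m_recov))
        return np.asarray(recovered_maxima) + b_hat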
3 Analysis
We now discuss two results which show that as long as the variance of the residual is below a certain
level, we can recover the distribution of the sample maximum. Recall from (1) that for low-rank
matrix completion methods to be applied we must assume that the permutation matrix P can be
4
decomposed into a low-rank component plus a high-rank residual matrix S:
$$P = UW + S, \qquad (2)$$
where U is a $v \times r$ orthogonal matrix that spans the $r \ll \min(v, t)$-dimensional column subspace
of P, and W is the corresponding coefficient matrix. We can then treat the residual S as a random
matrix whose entries are i.i.d. zero-mean Gaussian with variance $\sigma^2$. We arrive at our first result
by analyzing how the low-rank portion of P's singular spectrum interlaces with the contribution
coming from the residual, by treating P as a low-rank perturbation of a random matrix. If this
low-rank perturbation is sufficient to dominate the eigenvalues of the random matrix, then P can
be recovered with high fidelity at a low sampling rate [22, 17]. Consequently, we can estimate the
distribution of the maximum as well, as shown by our second result.
The following development relies on the observation that the eigenvalues of $PP^T$ are the squared
singular values of P. Thus, rather than analyzing the singular value spectrum of P directly, we can
analyze the eigenvalues of $PP^T$ using a recent result from [24]. This is important because, in order
to ensure recovery of P, we require that its singular value spectrum approximately retains the
shape of UW's. More precisely, we require that for some $0 < \varepsilon < 1$,
$$|\hat{\sigma}_i - \sigma_i| < \varepsilon\sigma_i, \quad i = 1, \dots, r; \qquad \hat{\sigma}_i < \varepsilon\sigma_r, \quad i = r+1, \dots, v \qquad (3)$$
where $\sigma_i$ and $\hat{\sigma}_i$ are the singular values of UW and P, respectively. (Recall that in this analysis P
is the perturbation of UW.) Thm. 3.1 relates the rate at which the eigenvalues are perturbed, $\varepsilon$, to the
parameterization of S in terms of $\sigma^2$. The theorem's principal assumption also relates $\sigma^2$ inversely
with the number of columns of P, which is just the number of trials t. Note, however, that the process
may be split up between several matrices $P_i$, and the results can then be combined. For purposes
of applying this result in practice we may then choose a number of columns t which gives the best
bound. Theorem 3.1 also assumes that the number of trials t is greater than the number of voxels
v, which is a difficult regime to explore empirically. Thus, our numerical evaluations cover the case
where t < v, while Thm. 3.1 covers the case where t is larger.
From the definition of P in (2), we have,
$$PP^T = UWW^T U^T + SS^T + UWS^T + SW^T U^T. \qquad (4)$$
We first analyze the change in the eigenvalue structure of $SS^T$ when perturbed by $UWW^T U^T$ (which
has r non-zero eigenvalues). The influence of the cross-terms ($UWS^T$ and $SW^T U^T$) is addressed
later. Thus, we have the following theorem.
Theorem 3.1. Denote the r non-zero eigenvalues of $Q = UWW^T U^T \in \mathbb{R}^{v \times v}$ by $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_r > 0$,
and let S be a $v \times t$ random matrix such that $S_{i,j} \sim \mathcal{N}(0, \sigma^2)$, with unknown $\sigma^2$. As
$v, t \to \infty$ such that $v/t \ll 1$, the eigenvalues $\hat{\lambda}_i$ of the perturbed matrix $Q + SS^T$ will satisfy
$$|\hat{\lambda}_i - \lambda_i| < \varepsilon\lambda_i, \quad i = 1, \dots, r; \qquad \hat{\lambda}_i < \varepsilon\lambda_r, \quad i = r+1, \dots, v \qquad (\star)$$
for some $0 < \varepsilon < 1$, whenever $\sigma^2 < \frac{\varepsilon\lambda_r}{t}$.
Proof. (Sketch) The proof proceeds by constructing the asymptotic eigenvalues $\hat{\lambda}_i$ (for $i = 1, \dots, v$),
and later bounding them to satisfy $(\star)$. The construction of $\hat{\lambda}_i$ is based on Theorem 2.1 from [24].
Firstly, an asymptotic spectral measure $\mu$ of $\frac{1}{t}SS^T$ is calculated, followed by its Cauchy transform
$G_\mu(z)$. Using $G_\mu(z)$ and its functional inverse $G_\mu^{-1}(\cdot)$, we get $\hat{\lambda}_i$ in terms of $\lambda_i$, $\sigma^2$, v and t. Finally,
the constraints in $(\star)$ are applied to $\hat{\lambda}_i$ to upper bound $\sigma^2$. The supplement includes the proof.
Note that the missing cross-terms would not change the result of Theorem 3.1 drastically, because
UW has r non-zero singular values and hence $UWS^T$ is a low-rank projection of a low-variance
random matrix, and this will clearly be dominated by either of the other terms. Having justified
the model in (2), the following theorem shows that the empirical distribution of the maximum Null
statistic approximates the true distribution.
Theorem 3.2. Let $m_t = \max_i P_{i,t}$ be the maximum observed test statistic at permutation trial
t, and similarly let $\hat{m}_t = \max_i \hat{P}_{i,t}$ be the maximum reconstructed test statistic. Further, let the
maximum reconstruction error be $\epsilon$, such that $|P_{i,t} - \hat{P}_{i,t}| \le \epsilon$. Then, for any real number $k > 0$,
we have
$$\Pr\left[\, m_t - \hat{m}_t - (b - \hat{b}) > k\epsilon \,\right] < \frac{1}{k^2}$$
where b is the bias term described in Section 2, and $\hat{b}$ is its estimate from the training phase.
The result is an application of Chebyshev's bound. The complete proof is given in the supplement.
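For intuition, here is a schematic of the Chebyshev step (our own sketch; the precise moment bounds are established in the supplement). Writing $D_t = m_t - \hat{m}_t - (b - \hat{b})$, the bias correction makes $D_t$ approximately zero-mean, and the bounded reconstruction error caps its standard deviation at the order of $\epsilon$, so Chebyshev's inequality gives

$$\Pr\big[\,|D_t - \mathbb{E}D_t| \ge k\,\mathrm{sd}(D_t)\,\big] \le \frac{1}{k^2}, \qquad \mathbb{E}D_t \approx 0, \quad \mathrm{sd}(D_t) \lesssim \epsilon,$$

and the one-sided version of this bound is the statement of Theorem 3.2.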
4 Experimental evaluations
Our experimental evaluations include four separate neuroimaging datasets of Alzheimer's Disease
(AD) patients, cognitively healthy age-matched controls (CN), and in some cases Mild Cognitive
Impairment (MCI) patients. The first of these is the Alzheimer's Disease Neuroimaging Initiative
(ADNI) dataset, a nation-wide multi-site study. ADNI is a landmark study sponsored by the NIH,
major pharmaceuticals and others to determine the extent to which multimodal brain imaging can
help predict onset and monitor progression of AD. The others were collected as part of other studies
of AD and MCI. We refer to these datasets as Datasets A–D. Their demographic characteristics are
as follows: Dataset A: 40 subjects, AD vs. CN, median age 76; Dataset B: 50 subjects, AD vs.
CN, median age 68; Dataset C: 55 subjects, CN vs. MCI, median age 65.16; Dataset D: 70
subjects, CN vs. MCI, median age 66.24.
Our evaluations focus on three main questions: (i) Can we recover an acceptable approximation
of the maximum statistic Null distribution from an approximation of the permutation test matrix?
(ii) What degree of computational speedup can we expect at various subsampling rates, and how
does this affect the trade-off with approximation error? (iii) How sensitive is the estimated α-level
threshold with respect to the recovered Null distribution? In all our experiments, the rank estimate
for subspace tracking (to construct the low-rank basis U) was taken as the number of subjects.
4.1 Can we recover the Maximum Null?
Our experiments suggest that our model can recover the maximum Null. We use Kullback–Leibler
(KL) divergence and Bhattacharya distance (BD) to compare the estimated maximum Null from our
model to the true one. We also construct a "Naive Null", where the subsampled statistics are pooled
and the Null distribution is constructed with no further processing (i.e., without completion). Using this as
a baseline, Fig. 1 shows the KL and BD values obtained from three datasets, at 20 different subsampling rates (ranging from 0.1% to 10%). Note that our model involves a training module where
the approximate "bias" of the residuals is estimated. This estimation is prone to noise (from, for example,
the number of training frames). Hence Fig. 1 also shows the error bars pertaining to 5 realizations
at the 20 sampling rates. The first observation from Fig. 1 is that both KL and BD measures of
the recovered Null against the true distribution are below $e^{-5}$ for sampling rates above 0.4%. This
(a) Dataset A (b) Dataset B (c) Dataset C
Figure 1: KL (blue) and BD (red) measures between the true max Null distribution (given by the full matrix
P) and that recovered by our method (thick lines), along with the baseline naive subsampling method (dotted
lines). Results for Datasets A, B, and C are shown here; the plot for Dataset D is in the extended version of the paper.
(a) Speedup (at 0.4%) is 45.1 (b) Speedup (at 0.4%) is 45.6 (c) Speedup (at 0.4%) is 48.5
Figure 3: Computation time (in minutes) of our model compared to that of computing the entire matrix P.
Results are for the same three datasets as in Fig. 1; the plot for Dataset D is in the extended version of
the paper. The horizontal line (magenta) shows the time taken to compute the full matrix P. The other three
curves are: subsampling (blue), GRASTA recovery (red), and the total time taken by our model (black). The plots
correspond to the low-sampling regime (< 1%); note the jump in the y-axis (black boxes). For reference, the
speedup factor at a 0.4% sampling rate is reported at the bottom of each plot.
suggests that our model recovers both the shape (low BD) and position (low KL) of the Null to high
accuracy at extremely low sub-sampling rates. We also see that above a certain minimum subsampling
rate (≈ 0.3%), the KL and BD do not change drastically as the rate is increased. This is expected
from the theory of matrix completion, where after observing a minimum number of data samples,
adding in new samples does not substantially increase the information content. Further, the error bars
(although very small in magnitude) of both KL and BD show that the recovery is noisy. We believe
this is due to the approximate estimate of the bias from the training module.
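For reference, here is how the two histogram-comparison measures could be computed from max-null samples (our sketch; the binning and smoothing choices here are ours, not the paper's):

    import numpy as np

    def kl_and_bd(samples_true, samples_recov, bins=200, eps=1e-12):
        # KL divergence and Bhattacharyya distance between two empirical
        # max-null distributions, computed on a shared histogram binning.
        lo = min(samples_true.min(), samples_recov.min())
        hi = max(samples_true.max(), samples_recov.max())
        edges = np.linspace(lo, hi, bins + 1)
        p, _ = np.histogram(samples_true, bins=edges)
        q, _ = np.histogram(samples_recov, bins=edges)
        p = p.astype(float) + eps; p /= p.sum()
        q = q.astype(float) + eps; q /= q.sum()
        kl = float(np.sum(p * np.log(p / q)))
        bd = float(-np.log(np.sum(np.sqrt(p * q))))
        return kl, bd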
4.2 What is the computational speedup?
Our experiments suggest that the speedup is substantial. Figs. 3 and 2 compare the time taken to
perform the complete permutation testing to that of our model. The three plots in Fig. 3 correspond
to the datasets used in Fig. 1, in that order. Each plot contains 4 curves and represents the time taken
by our model, the corresponding sampling and GRASTA [17] recovery (plus training) times, and the
total time to construct the entire matrix P (horizontal line). Fig. 2 shows the scatter plot of
computational speedup vs. KL divergence (over 3 repeated sets of experiments on all the datasets
and sampling rates). Our model achieved at least a 30-fold decrease in computation time in the
low-sampling regime (< 1%). Around 0.5%–0.6% sub-sampling (where the KL and BD are already
$< e^{-5}$), the computational speedup factor averaged over all datasets was 45×. This shows that our
model achieved good accuracy (low KL and BD) together with high computational speedup in
tandem, especially for 0.4%–0.7% sampling rates. However, note from Fig. 2 that there is a trade-off
between the speedup factor and the approximation error (KL or BD). Overall, the highest
computational speedup factor achieved at a recovery level of $e^{-5}$ on KL and BD is around 50× (and
this occurred around a 0.4%–0.5% sampling rate; refer to Fig. 2). It was observed that a speedup
factor of up to 55× was obtained for Datasets C and D at 0.3% subsampling, where the KL and BD
were as low as $e^{-5.5}$ (refer to Fig. 1 and the extended version of the paper).
Figure 2: Scatter plot of computational speedup vs. KL. The plot corresponds to the 20 different sampling
rates on all 4 datasets (for 5 repeated sets of experiments), and the colormap runs from 0.1% to 10% sampling
rate. The x-axis is in log scale.
4.3 How stable is the estimated α-threshold (clinical significance)?
Our experiments suggest that the threshold is stable. Fig. 4 and Table 1 summarize the clinical significance of our model. Fig. 4 shows the error in estimating the true max threshold, at the
1 − α = 0.95 level of confidence. The x-axis corresponds to the 20 different sampling rates used,
and the y-axis shows the absolute difference of thresholds in log scale.
(a) Datasets A, B (b) Datasets C, D
Figure 4: Error of the estimated t-statistic thresholds (red) for the 20 different subsampling rates on the four
datasets. The confidence level is 1 − α = 0.95. The y-axis is in log scale. For reference, the thresholds given
by the baseline model (blue) are included. Note that each plot corresponds to two datasets.
Observe that for sampling rates higher than 3%, the mean and maximum differences were 0.04
and 0.18, respectively. Note that the binning resolution of the max statistic used for constructing the
Null was 0.01. These results show that not only is the global shape of the maximum Null distribution
estimated to high accuracy (see Section 4.1), but so are the shape and area in the tail. To support
this observation, we show the absolute differences of the estimated thresholds on all the datasets at
4 different α levels in Table 1. The errors for 1 − α = 0.95, 0.99 are at most 0.16. The increase in
error for 1 − α > 0.995 is a sampling artifact and is expected. Note that in a few cases, the error at
0.5% is slightly higher than that at 0.3%, suggesting that the recovery is noisy (see Sec. 4.1 and the
error bars of Fig. 1). Overall, the estimated α-thresholds are both faithful and stable.
Table 1: Errors of estimated t-statistic thresholds on all datasets at two different subsampling rates.
Data name   Sampling rate   1 − α = 0.95   0.99   0.995   0.999
A           0.3%            0.16           0.11   0.14    0.07
A           0.5%            0.13           0.08   0.10    0.03
B           0.3%            0.02           0.05   0.03    0.13
B           0.5%            0.02           0.07   0.08    0.04
C           0.3%            0.04           0.13   0.21    0.20
C           0.5%            0.01           0.07   0.07    0.05
D           0.3%            0.08           0.10   0.27    0.31
D           0.5%            0.12           0.13   0.25    0.22
5 Conclusions and future directions
In this paper, we have proposed a novel method of efficiently approximating the permutation testing
matrix by first estimating the major singular vectors, then filling in the missing values via matrix
completion, and finally estimating the distribution of residual values. Experiments on four different neuroimaging datasets show that we can recover the distribution of the maximum Null statistic
to a high degree of accuracy, while maintaining a computational speedup factor of roughly 50×.
While our focus has been on neuroimaging problems, we note that multiple testing and False Discovery Rate (FDR) correction are important issues in genomic and RNA analyses, and our contribution may offer enhanced leverage to existing methodologies which use permutation testing in these
settings [6].
Acknowledgments: We thank Robert Nowak, Grace Wahba, Moo K. Chung and the anonymous reviewers for
their helpful comments, and Jia Xu for helping with a preliminary implementation of the model. This work
was supported in part by NIH R01 AG040396; NSF CAREER grant 1252725; NSF RI 1116584; Wisconsin
Partnership Fund; UW ADRC P50 AG033514; UW ICTR 1UL1RR025011 and a Veterans Administration
Merit Review Grant I01CX000165. Hinrichs is supported by a CIBM post-doctoral fellowship via NLM grant
2T15LM007359. The contents do not represent views of the Dept. of Veterans Affairs or the United States
Government.
References
[1] J. Ashburner and K. J. Friston. Voxel-based morphometry – the methods. NeuroImage, 11(6):805–821, 2000.
[2] J. Ashburner and K. J. Friston. Why voxel-based morphometry should be used. NeuroImage, 14(6):1238–1243, 2001.
[3] P. H. Westfall and S. S. Young. Resampling-based multiple testing: examples and methods for p-value adjustment, volume 279. Wiley-Interscience, 1993.
[4] J. M. Bland and D. G. Altman. Multiple significance tests: the Bonferroni method. British Medical Journal, 310(6973):170, 1995.
[5] J. Li and L. Ji. Adjusting multiple testing in multilocus analyses using the eigenvalues of a correlation matrix. Heredity, 95(3):221–227, 2005.
[6] J. Storey and R. Tibshirani. Statistical significance for genomewide studies. Proceedings of the National Academy of Sciences, 100(16):9440–9445, 2003.
[7] H. Finner and V. Gontscharuk. Controlling the familywise error rate with plug-in estimator for the proportion of true null hypotheses. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5):1031–1048, 2009.
[8] J. T. Leek and J. D. Storey. A general framework for multiple testing dependence. Proceedings of the National Academy of Sciences, 105(48):18718–18723, 2008.
[9] S. Clarke and P. Hall. Robustness of multiple testing procedures against dependence. The Annals of Statistics, pages 332–358, 2009.
[10] S. García, A. Fernández, J. Luengo, and F. Herrera. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences, 180(10):2044–2064, 2010.
[11] Y. Ge, S. Dudoit, and T. P. Speed. Resampling-based multiple testing for microarray data analysis. Test, 12(1):1–77, 2003.
[12] T. Nichols and S. Hayasaka. Controlling the familywise error rate in functional neuroimaging: a comparative review. Statistical Methods in Medical Research, 12:419–446, 2003.
[13] K. D. Singh, G. R. Barnes, and A. Hillebrand. Group imaging of task-related changes in cortical synchronisation using nonparametric permutation testing. NeuroImage, 19(4):1589–1601, 2003.
[14] D. Pantazis, T. E. Nichols, S. Baillet, and R. M. Leahy. A comparison of random field theory and permutation methods for the statistical analysis of MEG data. NeuroImage, 25(2):383–394, 2005.
[15] B. Gaonkar and C. Davatzikos. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification. NeuroImage, 78:270–283, 2013.
[16] J. M. Cheverud. A simple correction for multiple comparisons in interval mapping genome scans. Heredity, 87(1):52–58, 2001.
[17] J. He, L. Balzano, and A. Szlam. Incremental gradient on the Grassmannian for online foreground and background separation in subsampled video. In CVPR, 2012.
[18] M. Dwass. Modified randomization tests for nonparametric hypotheses. The Annals of Mathematical Statistics, 28(1):181–187, 1957.
[19] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[20] M. Fazel, H. Hindi, and S. Boyd. Rank minimization and applications in system theory. In American Control Conference, volume 4, 2004.
[21] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. arXiv preprint arXiv:0706.4138, 2007.
[22] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. arXiv preprint arXiv:1006.4046, 2010.
[23] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
[24] F. Benaych-Georges and R. R. Nadakuditi. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Advances in Mathematics, 227(1):494–521, 2011.
4,337 | 4,925 | Robust Multimodal Graph Matching:
Sparse Coding Meets Graph Matching
Marcelo Fiori
Universidad de la
Rep?ublica, Uruguay
[email protected]
Joshua Vogelstein
Duke University
Durham, NC 27708
[email protected]
Pablo Sprechmann
Duke University
Durham, NC 27708
[email protected]
Pablo Mus?e
Universidad de la
Rep?ublica, Uruguay
[email protected]
Guillermo Sapiro
Duke University
Durham, NC 27708
[email protected]
Abstract
Graph matching is a challenging problem with very important applications in a
wide range of fields, from image and video analysis to biological and biomedical
problems. We propose a robust graph matching algorithm inspired in sparsityrelated techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal
with weighted or unweighted graphs, as well as multimodal data, where different
graphs represent different types of data. The proposed approach is also naturally
integrated with collaborative graph inference techniques, solving general network
inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared
with state-of-the-art graph matching techniques in both synthetic and real graphs.
We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance
imaging (fMRI) data. The code is publicly available.
1
Introduction
Problems related to graph isomorphisms have been an important and enjoyable challenge for the
scientific community for a long time. The graph isomorphism problem itself consists in determining
whether two given graphs are isomorphic or not, that is, if there exists an edge preserving bijection
between the vertex sets of the graphs. This problem is also very interesting from the computational
complexity point of view, since its complexity level is still unsolved: it is one of the few problems in
NP not yet classified as P nor NP-complete (Conte et al., 2004). The graph isomorphism problem is
contained in the (harder) graph matching problem, which consists in finding the exact isomorphism
between two graphs. Graph matching is therefore a very challenging problem which has several
applications, e.g., in the pattern recognition and computer vision areas. In this paper we address the
problem of (potentially multimodal) graph matching when the graphs are not exactly isomorphic.
This is by far the most common scenario in real applications, since the graphs to be compared are
the result of a measuring or description process, which is naturally affected by noise.
Given two graphs $G_A$ and $G_B$ with $p$ vertices, which we will characterize in terms of their $p \times p$
adjacency matrices A and B, the graph matching problem consists in finding a correspondence
between the nodes of GA and GB minimizing some matching error. In terms of the adjacency
matrices, this corresponds to finding a matrix P in the set of permutation matrices P, such that
it minimizes some distance between $A$ and $PBP^T$. A common choice is the Frobenius norm $\|A - PBP^T\|_F^2$, where $\|M\|_F^2 = \sum_{ij} M_{ij}^2$. The graph matching problem can then be stated as
$$\min_{P\in\mathcal{P}} \|A - PBP^T\|_F^2 = \min_{P\in\mathcal{P}} \|AP - PB\|_F^2. \quad (1)$$
The combinatorial nature of the permutation search makes this problem NP in general, although
polynomial algorithms have been developed for a few special types of graphs, like trees or planar
graphs for example (Conte et al., 2004).
There are several and diverse techniques addressing the graph matching problem, including spectral
methods (Umeyama, 1988) and problem relaxations (Zaslavskiy et al., 2009; Vogelstein et al., 2012;
Almohamad & Duffuaa, 1993). A good review of the most common approaches can be found in
Conte et al. (2004). In this paper we focus on the relaxation techniques for solving an approximate
version of the problem. Maybe the simplest one is to relax the feasible set (the permutation matrices)
to its convex hull, the set of doubly stochastic matrices $\mathcal{D}$, which consists of the matrices with nonnegative entries such that each row and column sums up to one: $\mathcal{D} = \{M \in \mathbb{R}^{p\times p} : M_{ij} \ge 0,\ M\mathbf{1} = \mathbf{1},\ M^T\mathbf{1} = \mathbf{1}\}$, $\mathbf{1}$ being the $p$-dimensional vector of ones. The relaxed version of the problem is
$$\hat{P} = \arg\min_{P\in\mathcal{D}} \|AP - PB\|_F^2,$$
which is a convex problem, though the result is a doubly stochastic matrix instead of a permutation. The final node correspondence is obtained as the closest permutation matrix to $\hat{P}$: $P^* = \arg\min_{P\in\mathcal{P}} \|P - \hat{P}\|_F^2$, which is a linear assignment problem that can be solved in $O(p^3)$ by the Hungarian algorithm (Kuhn, 1955). However, this last step lacks any guarantee about the graph
matching problem itself. This approach will be referred to as QCP for quadratic convex problem.
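For illustration, the rounding step is a one-liner with SciPy's Hungarian-algorithm implementation; since $\arg\min_Q \|Q - \hat{P}\|_F^2$ over permutation matrices equals $\arg\max_Q \langle Q, \hat{P}\rangle$, a linear assignment problem (a minimal sketch, not from the paper's released code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def closest_permutation(P_hat):
    # argmin_Q ||Q - P_hat||_F^2 over permutations reduces to
    # argmax_Q <Q, P_hat>, solved by the Hungarian algorithm in O(p^3)
    rows, cols = linear_sum_assignment(-P_hat)  # negate to maximize
    Q = np.zeros_like(P_hat)
    Q[rows, cols] = 1.0
    return Q
```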
One of the newest approximate methods is the PATH algorithm by Zaslavskiy et al. (2009), which
combines this convex relaxation with a concave relaxation. Another new technique is the FAQ
method by Vogelstein et al. (2012), which solves a relaxed version of the Quadratic Assignment
Problem. We compare the method here proposed to all these techniques in the experimental section.
The main contributions of this work are two-fold. Firstly, we propose a new and versatile formulation for the graph matching problem which is more robust to noise and can naturally manage
multimodal data. The technique, which we call GLAG for Group lasso graph matching, is inspired
by the recent works on sparse modeling, and in particular group and collaborative sparse coding.
We present several experimental evaluations to back up these claims. Secondly, this proposed formulation fits very naturally into the alignment-free collaborative network inference problem, where
we collaborative exploit non-aligned (possibly multimodal) data to infer the underlying common
network, making this application never addressed before to the best of our knowledge. We assess
this with experiments using real fMRI data.
The rest of this paper is organized as follows. In Section 2 we present the proposed graph matching
formulation, and we show how to solve the optimization problem in Section 3. The joint collaborative network and permutation learning application is described in Section 4. Experimental results
are presented in Section 5, and we conclude in Section 6.
2
Graph matching formulation
We consider the problem of matching two graphs that are not necessarily perfectly isomorphic. We
will assume the following model: Assume that we have a noise free graph characterized by an
adjacency matrix $T$. Then we want to match two graphs with adjacency matrices $A = T + O_A$ and $B = P_o^T T P_o + O_B$, where $O_A$ and $O_B$ have a sparse number of non-zero elements of arbitrary
magnitude. This realistic model is often used in experimental settings, e.g., (Zaslavskiy et al., 2009).
In this context, the QCP formulation tends to find a doubly stochastic matrix P which minimizes the
"average error" between AP and PB. However, these spurious mismatching edges can be thought
of as outliers, so we would want a metric promoting that AP and PB share the same active set (non
zero entries representing edges), with the exception of some sparse entries. This can be formulated
in terms of the group Lasso penalization (Yuan & Lin, 2006). In short, the group Lasso takes a set
of groups of coefficients and promotes that only some of these groups are active, while the others
remain zero. Moreover, the usual behavior is that when a group is active, all the coefficients in the
group are non-zero. In this particular graph matching application, we form p2 groups,
one per matrix
entry (i, j), each one consisting of the 2-dimensional vector (AP)ij , (PB)ij . The proposed cost
function is then the sum of the l2 norms of the groups:
X
(AP)ij , (PB)ij .
f (P ) =
(2)
2
i,j
2
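For concreteness, the cost (2) is straightforward to evaluate; a minimal NumPy sketch (ours, not part of the paper's released implementation):

```python
import numpy as np

def group_lasso_cost(P, A, B):
    # f(P) = sum over entries (i, j) of ||((AP)_ij, (PB)_ij)||_2, eq. (2)
    AP, PB = A @ P, P @ B
    return np.sqrt(AP ** 2 + PB ** 2).sum()
```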
Ideally we would like to solve the graph matching problem by finding the minimum of f over the
set of permutation matrices P. Of course this formulation is still computationally intractable, so we
solve the relaxed version, changing P by its convex hull D, resulting in the convex problem
$$\hat{P} = \arg\min_{P\in\mathcal{D}} f(P). \quad (3)$$
As with the Frobenius formulation, the final step simply finds the closest permutation matrix to $\hat{P}$.
Let us analyze the case when A and B are the adjacency matrices of two isomorphic undirected
unweighted graphs with e edges and no self-loops. Since the graphs are isomorphic, there exist a
permutation matrix $P_o$ such that $A = P_o B P_o^T$.

Lemma 1. Under the conditions stated above, the minimum value of the optimization problem (3) is $2\sqrt{2}\,e$ and it is reached by $P_o$, although the solution is not unique in general. Moreover, any solution $P$ of problem (3) satisfies $AP = PB$.

Proof: Let $(a)_k$ denote all the $p^2$ entries of $AP$, and $(b)_k$ all the entries of $PB$. Then $f(P)$ can be re-written as $f(P) = \sum_k \sqrt{a_k^2 + b_k^2}$.

Observing that $\sqrt{a^2 + b^2} \ge \frac{\sqrt{2}}{2}(a + b)$, we have
$$f(P) = \sum_k \sqrt{a_k^2 + b_k^2} \ge \frac{\sqrt{2}}{2}\sum_k (a_k + b_k). \quad (4)$$
Now, since $P$ is doubly stochastic, the sum of all the entries of $AP$ is equal to the sum of all the entries of $A$, which is two times the number of edges. Therefore $\sum_k a_k = \sum_k b_k = 2e$ and $f(P) \ge 2\sqrt{2}\,e$.
The equality in (4) holds if and only if $a_k = b_k$ for all $k$, which means that $AP = PB$. In particular,
this is true for the permutation Po , which completes the proof of all the statements.
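The lemma is easy to verify numerically; a small self-contained check of the minimum value $2\sqrt{2}\,e$ (an illustration under the stated assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 50
A = np.triu((rng.random((p, p)) < 0.1).astype(float), 1)
A += A.T                                  # undirected, unweighted, no self-loops
Po = np.eye(p)[rng.permutation(p)]        # random permutation matrix
B = Po.T @ A @ Po                         # isomorphic copy
e = A.sum() / 2                           # number of edges
f_Po = np.sqrt((A @ Po) ** 2 + (Po @ B) ** 2).sum()
assert np.isclose(f_Po, 2 * np.sqrt(2) * e)
```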
This Lemma shows that the fact that the weights in A and B are not compared in magnitude does
not affect the matching performance when the two graphs are isomorphic and have equal weights.
On the other hand, this property plays a fundamental role when moving away from this setting. Indeed, since the group lasso tends to set complete groups to zero, and the actual value of the non-zero coefficients is less important, this allows grouping very dissimilar coefficients together, if that would result in fewer active groups. This is even more evident when using the $\ell_\infty$ norm instead of the $\ell_2$ norm of the groups, and the optimization remains very similar to the one presented
below. Moreover, the formulation remains valid when both graphs come from different modalities,
a fundamental property when for example addressing alignment-free collaborative graph inference
as presented in Section 4 (the elegance with which this graph matching formulation fits into such
problem will be further stressed there). In contrast, the Frobenius-based approaches mentioned
in the introduction are very susceptible to differences in edge magnitudes and not appropriate for
multimodal matching¹.
3
Optimization
The proposed minimization problem (3) is convex but non-differentiable. Here we use an efficient
variant of the Alternating Direction Method of Multipliers (ADMM) (Bertsekas & Tsitsiklis, 1989).
The idea is to write the optimization problem as an equivalent artificially constrained problem, using
two new variables $\alpha, \beta \in \mathbb{R}^{p\times p}$:
$$\min_{P\in\mathcal{D}} \sum_{i,j} \big\|\big(\alpha_{ij}, \beta_{ij}\big)\big\|_2 \quad \text{s.t.} \quad \alpha = AP,\ \beta = PB. \quad (5)$$
The ADMM method generates a sequence which converges to the minimum of the augmented
Lagrangian of the problem:
$$L(P, \alpha, \beta, U, V) = \sum_{i,j} \big\|\big(\alpha_{ij}, \beta_{ij}\big)\big\|_2 + \frac{c}{2}\|\alpha - AP + U\|^2 + \frac{c}{2}\|\beta - PB + V\|^2,$$
where $U$ and $V$ are related to the Lagrange multipliers and $c$ is a fixed constant.
¹If both graphs are binary and we limit to permutation matrices (for which there are no algorithms known to find the solution in polynomial time), then the minimizers of (2) and (1) are the same (Vince Lyzinski, personal communication).
The decoupling produced by the new artificial variables allows us to update their values one at a time, minimizing the augmented Lagrangian $L$. We first update the pair $(\alpha, \beta)$ while keeping $(P, U, V)$ fixed; then we minimize for $P$; and finally update $U$ and $V$, as described next in Algorithm 1.

Input: Adjacency matrices $A$, $B$, $c > 0$.
Output: Permutation matrix $P^*$
Initialize $U = 0$, $V = 0$, $P = \frac{1}{p}\mathbf{1}\mathbf{1}^T$
while stopping criterion is not satisfied do
  $(\alpha^{t+1}, \beta^{t+1}) = \arg\min_{\alpha,\beta} \sum_{i,j} \|(\alpha_{ij}, \beta_{ij})\|_2 + \frac{c}{2}\|\alpha - AP^t + U^t\|_F^2 + \frac{c}{2}\|\beta - P^tB + V^t\|_F^2$
  $P^{t+1} = \arg\min_{P\in\mathcal{D}} \frac{1}{2}\|\alpha^{t+1} - AP + U^t\|_F^2 + \frac{1}{2}\|\beta^{t+1} - PB + V^t\|_F^2$
  $U^{t+1} = U^t + \alpha^{t+1} - AP^{t+1}$
  $V^{t+1} = V^t + \beta^{t+1} - P^{t+1}B$
end
$P^* = \arg\min_{Q\in\mathcal{P}} \|Q - P\|_F^2$
Algorithm 1: Robust graph matching algorithm. See text for implementation details of each step.
The first subproblem is decomposable into $p^2$ scalar problems (one for each matrix entry),
$$\min_{\alpha_{ij}, \beta_{ij}} \big\|\big(\alpha_{ij}, \beta_{ij}\big)\big\|_2 + \frac{c}{2}\big(\alpha_{ij} - (AP^t)_{ij} + U^t_{ij}\big)^2 + \frac{c}{2}\big(\beta_{ij} - (P^tB)_{ij} + V^t_{ij}\big)^2.$$
From the optimality conditions on the subgradient of this subproblem, it can be seen that it can be solved in closed form by means of the well-known vector soft-thresholding operator (Yuan & Lin, 2006): $S_v(\mathbf{b}, \lambda) = \Big[1 - \frac{\lambda}{\|\mathbf{b}\|_2}\Big]_+ \mathbf{b}$.
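In code, this operator and the resulting $(\alpha, \beta)$-update are a few lines; the threshold $\lambda = 1/c$ follows from the subproblem above (a minimal sketch):

```python
import numpy as np

def vector_soft_threshold(a, b, lam):
    # S_v((a, b), lam) = [1 - lam / ||(a, b)||_2]_+ (a, b), applied entry-wise
    norm = np.sqrt(a ** 2 + b ** 2)
    scale = np.maximum(1.0 - lam / np.maximum(norm, 1e-12), 0.0)
    return scale * a, scale * b

# (alpha, beta)-step of Algorithm 1, with lam = 1/c:
# alpha, beta = vector_soft_threshold(A @ P - U, P @ B - V, 1.0 / c)
```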
The second subproblem is a minimization of a convex differentiable function over a convex set, so
general solvers can be chosen for this task. For instance, a projected gradient descent method can
be used. However, this would require computing several projections onto $\mathcal{D}$ per iteration, which is
one of the computationally most expensive steps. Nevertheless, we can choose to solve a linearized
version of the problem while keeping the convergence guarantees of the algorithm (Lin et al., 2011).
In this case, the linear approximation of the first term is:
$$\frac{1}{2}\|\alpha^{t+1} - AP + U^t\|_F^2 \approx \frac{1}{2}\|\alpha^{t+1} - AP^k + U^t\|_F^2 + \langle g_k, P - P^k\rangle + \frac{1}{2\tau}\|P - P^k\|_F^2,$$
where $g_k = -A^T(\alpha^{t+1} + U^t - AP^k)$ is the gradient of the linearized term, $\langle\cdot,\cdot\rangle$ is the usual inner product of matrices, and $\tau$ is any constant such that $\tau < \frac{1}{\lambda(A^TA)}$, with $\lambda(\cdot)$ being the spectral norm.
The second term can be linearized analogously, so the minimization of the second step becomes
$$\min_{P\in\mathcal{D}}\ \frac{1}{2}\big\|P - \underbrace{\big(P^k - \tau A^T(\alpha^{t+1} + U^t - AP^k)\big)}_{\text{fixed matrix } C}\big\|_F^2 + \frac{1}{2}\big\|P - \underbrace{\big(P^k - \tau(\beta^{t+1} + V^t - P^kB)B^T\big)}_{\text{fixed matrix } D}\big\|_F^2,$$
which is simply the projection of the matrix $\frac{1}{2}(C + D)$ onto $\mathcal{D}$.
Summarizing, each iteration consists of $p^2$ vector thresholdings when solving for $(\alpha, \beta)$, one projection onto $\mathcal{D}$ when solving for $P$, and two matrix multiplications for the update of $U$ and $V$. The code is publicly available at www.fing.edu.uy/~mfiori.
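For illustration, the following self-contained Python sketch assembles the pieces above into Algorithm 1 (this is our illustrative reimplementation, not the released code; the projection onto $\mathcal{D}$ is approximated with Dykstra's algorithm, and the linearized $P$-step is written in the gradient-descent sign convention):

```python
import numpy as np

def project_doubly_stochastic(X, n_iter=100):
    # Approximate Euclidean projection onto D via Dykstra's algorithm,
    # cycling the row-sum, column-sum and nonnegativity constraints.
    n = X.shape[0]
    projections = (
        lambda M: M - (M.sum(axis=1, keepdims=True) - 1.0) / n,  # rows sum to 1
        lambda M: M - (M.sum(axis=0, keepdims=True) - 1.0) / n,  # cols sum to 1
        lambda M: np.maximum(M, 0.0),                            # M >= 0
    )
    P = X.copy()
    incr = [np.zeros_like(X) for _ in projections]
    for _ in range(n_iter):
        for k, proj in enumerate(projections):
            Y = P + incr[k]
            P = proj(Y)
            incr[k] = Y - P
    return P

def glag_match(A, B, c=1.0, n_iter=300):
    p = A.shape[0]
    P = np.full((p, p), 1.0 / p)      # initialize at the barycenter of D
    U = np.zeros((p, p))
    V = np.zeros((p, p))
    tau = 0.9 / max(np.linalg.norm(A, 2), np.linalg.norm(B, 2)) ** 2
    for _ in range(n_iter):
        # (alpha, beta)-step: entry-wise vector soft-thresholding, lam = 1/c
        a0, b0 = A @ P - U, P @ B - V
        nrm = np.maximum(np.sqrt(a0 ** 2 + b0 ** 2), 1e-12)
        scale = np.maximum(1.0 - (1.0 / c) / nrm, 0.0)
        alpha, beta = scale * a0, scale * b0
        # linearized P-step: average the two gradient targets, project onto D
        C = P + tau * A.T @ (alpha + U - A @ P)
        D = P + tau * (beta + V - P @ B) @ B.T
        P = project_doubly_stochastic(0.5 * (C + D))
        # dual updates
        U = U + alpha - A @ P
        V = V + beta - P @ B
    return P   # round with the Hungarian algorithm to obtain P*
```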
4
Application to joint graph inference of not pre-aligned data
Estimating the inverse covariance matrix is a very active field of research, in particular the inference of the support of this matrix, since the non-zero entries carry information about the conditional dependence between variables. In numerous applications, this matrix is known to be sparse, and in this
regard the graphical Lasso has proven to be a good estimator for the inverse covariance matrix (Yuan
& Lin, 2007; Fiori et al., 2012) (also for non-Gaussian data (Loh & Wainwright, 2012)). Assume
that we have a $p$-dimensional multivariate normal distributed variable $X \sim \mathcal{N}(0, \Sigma)$; let $\mathbf{X} \in \mathbb{R}^{k\times p}$ be a data matrix containing $k$ independent observations of $X$, and $S$ its empirical covariance matrix. The graphical Lasso estimator for $\Sigma^{-1}$ is the matrix $\Theta$ which solves the optimization problem
$$\min_{\Theta \succ 0}\ \mathrm{tr}(S\Theta) - \log\det\Theta + \lambda \sum_{i,j} |\Theta_{ij}|, \quad (6)$$
which corresponds to the maximum likelihood estimator for $\Sigma^{-1}$ with an $\ell_1$ regularization.
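In practice, (6) can be solved with off-the-shelf software; for instance, a minimal sketch using scikit-learn's GraphicalLasso on stand-in data (the data and the regularization parameter here are hypothetical):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((900, 50))     # k = 900 observations of p = 50 variables
model = GraphicalLasso(alpha=0.1).fit(X)
Theta = model.precision_               # estimate of the inverse covariance
support = np.abs(Theta) > 1e-8         # inferred conditional-dependence graph
```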
Collaborative network inference has gained a lot of attention in the last years (Chiquet et al., 2011),
especially with fMRI data, e.g., (Varoquaux et al., 2010). This problem consists of estimating two (or more) matrices $\Sigma_A^{-1}$ and $\Sigma_B^{-1}$ from data matrices $\mathbf{X}_A$ and $\mathbf{X}_B$ as above, with the additional prior information that the inverse covariance matrices share the same support. The joint estimation of $\Theta_A$ and $\Theta_B$ is performed by solving
$$\min_{\Theta_A \succ 0,\, \Theta_B \succ 0}\ \mathrm{tr}(S_A\Theta_A) - \log\det\Theta_A + \mathrm{tr}(S_B\Theta_B) - \log\det\Theta_B + \lambda \sum_{i,j} \big\|\big(\Theta^A_{ij}, \Theta^B_{ij}\big)\big\|_2, \quad (7)$$
where the first four terms correspond to the maximum likelihood estimators for $\Theta_A$, $\Theta_B$, and the last term is the group Lasso penalty which promotes that $\Theta_A$ and $\Theta_B$ have the same active set.
This formulation relies on the limiting underlying assumption that the variables in both datasets
(the columns of XA and XB ) are in correspondence, i.e., the graphs determined by the adjacency
matrices $\Theta_A$ and $\Theta_B$ are aligned. However, this is in general not the case in practice. Motivated
by the formulation presented in Section 2, we propose to overcome this limitation by incorporating
a permutation matrix into the optimization problem, and jointly learn it on the estimation process.
The proposed optimization problem is then given by
$$\min_{\substack{\Theta_A, \Theta_B \succ 0 \\ P\in\mathcal{P}}}\ \mathrm{tr}(S_A\Theta_A) - \log\det\Theta_A + \mathrm{tr}(S_B\Theta_B) - \log\det\Theta_B + \lambda \sum_{i,j} \big\|\big((\Theta_A P)_{ij}, (P\Theta_B)_{ij}\big)\big\|_2. \quad (8)$$
Even after the relaxation of the constraint $P \in \mathcal{P}$ to $P \in \mathcal{D}$, the joint minimization of (8) over $(\Theta_A, \Theta_B)$ and $P$ is a non-convex problem. However, it is convex when minimized only over $(\Theta_A, \Theta_B)$ or $P$, leaving the other fixed. Problem (8) can then be minimized using a block-coordinate descent type of approach, iteratively minimizing over $(\Theta_A, \Theta_B)$ and $P$.
The first subproblem (solving (8) with P fixed) is a very simple variant of (7), which can be solved
very efficiently by means of iterative thresholding algorithms (Fiori et al., 2013). In the second
subproblem, since $(\Theta_A, \Theta_B)$ are fixed, the only term to minimize is the last one, which corresponds
to the graph matching formulation presented in Section 2.
5
Experimental results
We now present the performance of our algorithm and compare it with the most recent techniques
in several scenarios including synthetic and real graphs, multimodal data, and fMRI experiments.
In the cases where there is a "ground truth," the performance is measured in terms of the matching error, defined as $\|A_o - PB_oP^T\|_F^2$, where $P$ is the obtained permutation matrix and $(A_o, B_o)$ are
the original adjacency matrices.
5.1
Graph matching: Synthetic graphs
We focus here on the traditional graph matching problem of undirected weighted graphs, both with
and without noise. More precisely, let Ao be the adjacency matrix of a random weighted graph and
$B_o$ a permuted version of it, generated with a random permutation matrix $P_o$, i.e., $B_o = P_o^T A_o P_o$.
We then add a certain number N of random edges to Ao with the same weight distribution as the
original weights, and another N random edges to Bo , and from these noisy versions we try to recover
the original matching (or any matching between Ao and Bo , since it may not be unique).
We show the results using three different techniques for the generation of the graphs: the Erdős–Rényi model (Erdős & Rényi, 1959), the model by Barabási & Albert (1999) for scale-free graphs, and graphs with a given degree distribution generated with the BTER algorithm (Seshadhri et al., 2012). These models are representative of a wide range of real-world graphs (Newman, 2010). In the case of the BTER algorithm, the degree distribution was generated according to a geometric law, that is: $\mathrm{Prob}(\text{degree} = t) = (1 - e^{-\lambda})e^{-\lambda t}$.
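As an illustration of this experimental setup, a minimal generator for one noisy pair (our sketch; the parameters are hypothetical):

```python
import numpy as np

def noisy_pair(p=100, edge_prob=0.05, n_noise=10, seed=0):
    rng = np.random.default_rng(seed)
    A = np.triu((rng.random((p, p)) < edge_prob).astype(float), 1)
    A += A.T                              # Erdos-Renyi graph A_o
    Po = np.eye(p)[rng.permutation(p)]
    B = Po.T @ A @ Po                     # permuted copy B_o
    for M in (A, B):                      # add N spurious edges to each graph
        added = 0
        while added < n_noise:
            i, j = rng.integers(0, p, size=2)
            if i != j and M[i, j] == 0:
                M[i, j] = M[j, i] = 1.0
                added += 1
    return A, B, Po
```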
We compared the performance of our algorithm with the technique by Zaslavskiy et al. (2009)
(referred to as PATH), the FAQ method described in Vogelstein et al. (2012), and the QCP approach.
Figure 1 shows the matching error as a function of the noise level for graphs with p = 100 nodes
(top row), and for p = 150 nodes (bottom row). The number of edges varies between 200 and 400
for graphs with 100 nodes, and between 300 and 600 for graphs with 150 nodes, depending on the
model. The performance is averaged over 100 runs. This figure shows that our method is more
stable, and consistently outperforms the other methods (considered state-of-the-art), especially for noise levels in the low range (for large noise levels, it is not clear what a "true" matching is, and in addition the sparsity hypothesis is no longer valid).
[Figure 1 plots omitted: matching error vs. noise curves for (a) Erdős–Rényi, (b) scale-free, and (c) BTER graphs with p = 100 (top row), and (d) Erdős–Rényi, (e) scale-free, and (f) BTER graphs with p = 150 (bottom row).]
Figure 1: Matching error for synthetic graphs with p = 100 nodes (top row) and p = 150 nodes (bottom row).
In solid black our proposed GLAG algorithm, in long-dashed blue the PATH algorithm, in short-dashed red the
FAQ method, and in dotted black the QCP.
5.2
Graph matching: Real graphs
We now present similar experiments to those in the previous section but with real graphs. We use
the C. elegans connectome. Caenorhabditis elegans is an extensively studied roundworm, whose somatic nervous system consists of 279 neurons that make synapses with other neurons. The two types
of connections (chemical and electrical) between these 279 neurons have been mapped (Varshney
et al., 2011), and their corresponding adjacency matrices, Ac and Ae , are publicly available.
We match both the chemical and the electrical connection graphs against noisy artificially permuted
versions of them. The permuted graphs are constructed following the same procedure used in Section
5.1 for synthetic graphs. The weights of the added noise follow the same distribution as the original
weights. The results are shown in Figure 2. These results suggest that from the prior art, the PATH
algorithm is more suitable for the electrical connection network, while the FAQ algorithm works
better for the chemical one. Our method outperforms both of them for both types of connections.
5.3
Multimodal graph matching
One of the advantages of the proposed approach is its capability to deal with multimodal data. As
discussed in Section 2, the group Lasso type of penalty promotes the supports of AP and PB to be
identical, almost independently of the actual values of the entries. This allows to match weighted
graphs where the weights may follow completely different probability distributions. This is commonly the case when dealing with multimodal data: when a network is measured using significantly
different modalities, one expects the underlying connections to be the same but no relation can be
assumed between the actual weights of these connections. This is even the case for example for
fMRI data when measured with different instruments. In what follows, we evaluate the performance
of the proposed method in two examples of multimodal graph matching.
(a) Electrical connection graph
(b) Chemical connection graph
Figure 2: Matching error for the C. elegans connectome, averaged over 50 runs. In solid black our proposed
GLAG algorithm, in long-dashed blue the PATH algorithm, and in short-dashed red the FAQ method. Note that
in the chemical connection graph, the matching error of our algorithm is zero until noise levels of ≈ 50.
We first generate an auxiliary binary random graph $A_b$ and a permuted version $B_b = P_o^T A_b P_o$.
Then, we assign weights to the graphs according to distributions pA and pB (that will be specified
for each experiment), thus obtaining the weighted graphs A and B. We then add noise consisting
of spurious weighted edges following the same distribution as the original graphs (i.e., pA for A
and pB for B). Finally, we run all four graph matching methods to recover the permutation. The
matching error is measured in the unweighted graphs as $\|A_b - PB_bP^T\|_F$. Note that while this
metric might not be appropriate for the optimization stage when considering multimodal data, it
is appropriate for the actual error evaluation, measuring mismatches. Comparing with the original
permutation matrix may not be very informative since there is no guarantee that the matrix is unique,
even for the original noise-free data.
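For concreteness, the modality-specific weight assignment of Fig. 3(c,d) can be reproduced along the following lines (a sketch; we read N(1, 0.4) as mean 1 and standard deviation 0.4, which is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 100
Ab = np.triu((rng.random((p, p)) < 0.05).astype(float), 1)
Ab += Ab.T                                  # shared binary support
Po = np.eye(p)[rng.permutation(p)]
Bb = Po.T @ Ab @ Po
# same support, different weight distributions per modality
WA = np.triu(rng.normal(1.0, 0.4, (p, p)), 1) * np.triu(Ab, 1)
WA += WA.T                                  # Gaussian weights for A
WB = np.triu(rng.uniform(1.0, 2.0, (p, p)), 1) * np.triu(Bb, 1)
WB += WB.T                                  # uniform [1, 2] weights for B
```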
Figures 3(a) and 3(b) show the comparison when the weights in both graphs are Gaussian distributed,
but with different means and variances. Figures 3(c) and 3(d) show the performances when the
weights of A are Gaussian distributed, and the ones of B follow a uniform distribution. See captions
for details. These results confirm the intuition described above, showing that our method is more
suitable for multimodal graphs, especially in the low range of noise.
[Figure 3 plots omitted: matching error vs. noise curves for (a) Erdős–Rényi and (b) scale-free graphs with Gaussian weights in both graphs, and (c) Erdős–Rényi and (d) scale-free graphs with Gaussian vs. uniform weights.]
Figure 3: Matching error for multimodal graphs with p = 100 nodes. In (a) and (b), weights in A are
N (1, 0.4) and weights in B are N (4, 1). In (c) and (d), weights in A are N (1, 0.4) and weights in B are
uniform in [1, 2]. In solid black our proposed GLAG algorithm, in long-dashed blue the PATH algorithm, in
short-dashed red the FAQ method, and in dotted black the QCP.
5.4 Collaborative inference
In this last experiment, we illustrate the application of the permuted collaborative graph inference
presented in Section 4 with real resting-state fMRI data, publicly available (Nooner, 2012). We consider here test-retest studies, that is, the same subject undergoing resting-state fMRI in two different
sessions separated by a break. Each session consists of almost 10 minutes of data, acquired with
a sampling period of 0.645s, producing about 900 samples per study. The CC200 atlas (Craddock
et al., 2012) was used to extract the time-series for the ≈200 regions of interest (ROIs), resulting in two data matrices $\mathbf{X}_A, \mathbf{X}_B \in \mathbb{R}^{900\times 200}$, corresponding to test and retest respectively.
To illustrate the potential of the proposed framework, we show that using only part of the data in
XA and part of the data in a permuted version of XB , we are able to infer a connectivity matrix
almost as accurately as using the whole data. Working with permuted data is very important in this
application in order to handle possible misalignments to the atlas.
Since there is no ground truth for the connectivity, and as mentioned before the collaborative setting
(7) has already been proven successful, we take as ground truth the result of the collaborative inference using the empirical covariance matrices of $\mathbf{X}_A$ and $\mathbf{X}_B$, denoted by $S_A$ and $S_B$. The results of this collaborative inference procedure are the two inverse covariance matrices $\Theta_{GT}^A$ and $\Theta_{GT}^B$. In short, the gold standard built for this experiment is found by solving (with the entire data)
$$\min_{\Theta_A \succ 0,\, \Theta_B \succ 0}\ \mathrm{tr}(S_A\Theta_A) - \log\det\Theta_A + \mathrm{tr}(S_B\Theta_B) - \log\det\Theta_B + \lambda \sum_{i,j} \big\|\big(\Theta^A_{ij}, \Theta^B_{ij}\big)\big\|_2.$$
Now, let $\mathbf{X}_H^A$ be the first 550 samples of $\mathbf{X}_A$, and $\mathbf{X}_H^B$ the first 550 samples of $\mathbf{X}_B$, which correspond to a little less than 6 minutes of study. We compute the empirical covariance matrices $S_H^A$ and $S_H^B$ of these data matrices, and we artificially permute the second one: $\tilde{S}_H^B = P_o^T S_H^B P_o$. With these two matrices $S_H^A$ and $\tilde{S}_H^B$ we run the algorithm described in Section 4, which alternately computes the inverse covariance matrices $\Theta_H^A$ and $\Theta_H^B$ and the matching $P$ between them.
We compare this approach against the computation of the inverse covariance matrix using only one of the studies. Let $\Theta_s^A$ and $\Theta_s^B$ be the results of the graphical Lasso (6) using $S_A$ and $S_B$:
$$\Theta_s^K = \arg\min_{\Theta \succ 0}\ \mathrm{tr}(S_K\Theta) - \log\det\Theta + \lambda \sum_{i,j} |\Theta_{ij}|, \quad \text{for } K \in \{A, B\}.$$
This experiment is repeated for 5 subjects in the database. The errors $\|\Theta_{GT}^A - \Theta_s^A\|_F$ and $\|\Theta_{GT}^A - \Theta_H^A\|_F$ are shown in Figure 4. The errors for $\Theta^B$ are very similar. Using less than 6 minutes of each study, with the variables not pre-aligned, the permuted collaborative inference procedure proposed in Section 4 outperforms the classical graphical Lasso using the full 10 minutes of study.
Figure 4: Inverse covariance matrix estimation for fMRI data (error, $\times 10^{-3}$, for subjects 1-5). In blue, error using one complete 10-minute study: $\|\Theta_{GT}^A - \Theta_s^A\|_F$. In red, error $\|\Theta_{GT}^A - \Theta_H^A\|_F$ with collaborative inference using about 6 minutes of each study, but solving for the unknown node permutations at the same time.

6
Conclusions
We have presented a new formulation for the graph matching problem, and proposed an optimization
algorithm for minimizing the corresponding cost function. The reported results show its suitability
for the graph matching problem of weighted graphs, outperforming previous state-of-the-art methods, both in synthetic and real graphs. Since in the problem formulation the weights of the graphs
are not compared explicitly, the method can deal with multimodal data, outperforming the other
compared methods. In addition, the proposed formulation naturally fits into the pre-alignment-free
collaborative network inference framework, where the permutation is estimated together with the
underlying common network, with promising preliminary results in applications with real data.
Acknowledgements: Work partially supported by ONR, NGA, NSF, ARO, AFOSR, and ANII.
References
Almohamad, H. and Duffuaa, S. A linear programming approach for the weighted graph matching problem. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 15(5):522–525, 1993.
Barabási, A. and Albert, R. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
Bertsekas, D. and Tsitsiklis, J. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.
Chiquet, J., Grandvalet, Y., and Ambroise, C. Inferring multiple graphical structures. Statistics and Computing, 21(4):537–553, 2011.
Conte, D., Foggia, P., Sansone, C., and Vento, M. Thirty years of graph matching in pattern recognition. International Journal of Pattern Recognition and Artificial Intelligence, 18(03):265–298, 2004.
Craddock, R.C., James, G.A., Holtzheimer, P.E., Hu, X.P., and Mayberg, H.S. A whole brain fMRI atlas generated via spatially constrained spectral clustering. Human Brain Mapping, 33(8):1914–1928, 2012.
Erdős, P. and Rényi, A. On random graphs, I. Publicationes Mathematicae, 6:290–297, 1959.
Fiori, Marcelo, Musé, Pablo, and Sapiro, Guillermo. Topology constraints in graphical models. In Advances in Neural Information Processing Systems 25, pp. 800–808, 2012.
Fiori, Marcelo, Musé, Pablo, Hariri, Ahmad, and Sapiro, Guillermo. Multimodal graphical models via group lasso. Signal Processing with Adaptive Sparse Structured Representations, 2013.
Kuhn, H. W. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83–97, 1955.
Lin, Z., Liu, R., and Su, Z. Linearized alternating direction method with adaptive penalty for low-rank representation. In Advances in Neural Information Processing Systems 24, pp. 612–620, 2011.
Loh, P. and Wainwright, M. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. In Advances in Neural Information Processing Systems 25, pp. 2096–2104, 2012.
Newman, M. Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA, 2010.
Nooner, K. et al. The NKI-Rockland sample: A model for accelerating the pace of discovery science in psychiatry. Frontiers in Neuroscience, 6(152), 2012.
Seshadhri, C., Kolda, T.G., and Pinar, A. Community structure and scale-free collections of Erdős–Rényi graphs. Physical Review E, 85(5):056109, 2012.
Umeyama, S. An eigendecomposition approach to weighted graph matching problems. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 10(5):695–703, 1988.
Varoquaux, G., Gramfort, A., Poline, J.B., and Bertrand, T. Brain covariance selection: better individual functional connectivity models using population prior. In Advances in Neural Information Processing Systems 23, pp. 2334–2342, 2010.
Varshney, L., Chen, B., Paniagua, E., Hall, D., and Chklovskii, D. Structural properties of the Caenorhabditis elegans neuronal network. PLoS Computational Biology, 7(2):e1001066, 2011.
Vogelstein, J.T., Conroy, J.M., Podrazik, L.J., Kratzer, S.G., Harley, E.T., Fishkind, D.E., Vogelstein, R.J., and Priebe, C.E. Fast approximate quadratic programming for large (brain) graph matching. arXiv:1112.5507, 2012.
Yuan, M. and Lin, Y. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B, 68(1):49–67, 2006.
Yuan, M. and Lin, Y. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, February 2007.
Zaslavskiy, M., Bach, F., and Vert, J.P. A path following algorithm for the graph matching problem. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(12):2227–2242, 2009.
4,338 | 4,926 | Deep Fisher Networks
for Large-Scale Image Classification
Karen Simonyan
Andrea Vedaldi
Andrew Zisserman
Visual Geometry Group, University of Oxford
{karen,vedaldi,az}@robots.ox.ac.uk
Abstract
As massively parallel computations have become broadly available with modern
GPUs, deep architectures trained on very large datasets have risen in popularity. Discriminatively trained convolutional neural networks, in particular, were
recently shown to yield state-of-the-art performance in challenging image classification benchmarks such as ImageNet. However, elements of these architectures
are similar to standard hand-crafted representations used in computer vision. In
this paper, we explore the extent of this analogy, proposing a version of the state-of-the-art Fisher vector image encoding that can be stacked in multiple layers.
This architecture significantly improves on standard Fisher vectors, and obtains
competitive results with deep convolutional networks at a smaller computational
learning cost. Our hybrid architecture allows us to assess how the performance of
a conventional hand-crafted image classification pipeline changes with increased
depth. We also show that convolutional networks and Fisher vector encodings are
complementary in the sense that their combination further improves the accuracy.
1
Introduction
Discriminatively trained deep convolutional neural networks (CNN) [18] have recently achieved impressive state-of-the-art results over a number of areas, including, in particular, the visual recognition
of categories in the ImageNet Large-Scale Visual Recognition Challenge [4]. This success is built
on many years of tuning and incorporating ideas into CNNs in order to improve their performance.
Many of the key ideas in CNN have now been absorbed into features proposed in the computer vision
literature ? some have been discovered independently and others have been overtly borrowed. For
example: the importance of whitening [11]; max pooling and sparse coding [26, 33]; non-linearity
and normalization [20]. Indeed, several standard features and pipelines in computer vision, such as
SIFT [19] and a spatial pyramid on Bag of visual Words (BoW) [16] can be seen as corresponding
to layers of a standard CNN. However, image classification pipelines used in the computer vision
literature are still generally quite shallow: either a global feature vector is computed over an image, and used directly for classification; or, in a few cases, a two layer hierarchy is used, where the
outputs of a number of classifiers form the global feature vector for the image (e.g. attributes and
classemes [15, 30]).
The question we address in this paper is whether it is possible to improve the performance of off-the-shelf computer vision features by organising them into a deeper architecture. To this end we make
the following contributions: (i) we introduce a Fisher Vector Layer, which is a generalization of the
standard FV to a layer architecture suitable for stacking; (ii) we demonstrate that by discriminatively
training several such layers and stacking them into a Fisher Vector Network, an accuracy competitive
with the deep CNN can be achieved, whilst staying in the realms of conventional SIFT and colour
features and FV encodings; and (iii) we show that class posteriors, computed by the deep CNN and
FV, are complementary and can be combined to significantly improve the accuracy.
The rest of the paper is organised as follows. After a discussion of the related work, we begin
with a brief description of the conventional FV encoding [20] (Sect. 2). We then show how this
representation can be modified to be used as a layer in a deeper architecture (Sect. 3) and how
the latter can be discriminatively learnt to yield a deep Fisher network (Sect. 4). After discussing
important details of the implementation (Sect. 5), we evaluate our architecture on the ImageNet
image classification benchmark (Sect. 6).
Related work. There is a vast literature on large-scale image classification, which we briefly review
here. One widely used approach is to extract local features such as SIFT [19] densely from each image, aggregate and encode them as high-dimensional vectors, and feed the latter to a classifier, e.g.
an SVM. There exists a large variety of different encodings that can be used for this purpose, including the BoW [9, 29] encoding, sparse coding [33], and the FV encoding [20]. Since FV was shown
to outperform other encodings [6] and achieve very good performance on various image recognition
benchmarks [21, 28], we use it as the basis of our framework. We note that other recently proposed
encodings (e.g. [5]) can be readily employed in the place of FV. Most encodings are designed to disregard the spatial location of features in order to be invariant to image transformations; in practice,
however, retaining weak spatial information yields an improved classification performance. This
can be incorporated by dividing the image into regions, encoding each of them individually, and
stacking the result in a composite higher-dimensional code, known as a spatial pyramid [16]. The
alternative, which does not increase the encoding dimensionality, is to augment the local features
with their spatial coordinates [24].
Another vast family of image classification techniques is based on Deep Neural Networks (DNN),
which are inspired by the layered structure of the visual cortex in mammals [22]. DNNs can be
trained greedily, in a layer-by-layer manner, as in Restricted Boltzmann Machines [12] and (sparse)
auto-encoders [3, 17], or by learning all layers simultaneously, which is relatively efficient if the
layers are convolutional [18]. In particular, the advent of massively-parallel GPUs has recently made
it possible to train deep convolutional networks on a large scale with excellent performance [7, 14].
It was also shown that techniques such as training and test data augmentation, as well as averaging
the outputs of independently trained DNNs, can significantly improve the accuracy.
There have been attempts to bridge these two families, exploring the trade-offs between network
depth and width, as well as the complexity of the layers. For instance, dense feature encoding using
the bag of visual words was considered as a single layer of a deep network in [1, 8, 32].
2
Fisher vector encoding for image classification
The Fisher vector encoding $\Phi$ of a set of features $\{x_p\}$ (e.g. densely computed SIFT features) is
based on fitting a parametric generative model, e.g. the Gaussian Mixture Model (GMM), to the
features, and then encoding the derivatives of the log-likelihood of the model with respect to its
parameters [13]. In the particular case of GMMs with diagonal covariances, used here, this leads to
the representation which captures the average first and second order differences between the features
and each of the GMM centres [20]:
$$\Phi^{(1)}_k = \frac{1}{N\sqrt{\pi_k}} \sum_{p=1}^{N} \gamma_k(x_p)\,\frac{x_p - \mu_k}{\sigma_k}, \qquad \Phi^{(2)}_k = \frac{1}{N\sqrt{2\pi_k}} \sum_{p=1}^{N} \gamma_k(x_p)\left[\frac{(x_p - \mu_k)^2}{\sigma_k^2} - 1\right] \quad (1)$$
Here, $\{\pi_k, \mu_k, \sigma_k\}_k$ are the mixture weights, means, and diagonal covariances of the GMM, which is computed on the training set and used for the description of all images; $\gamma_k(x_p)$ is the soft assignment weight of the $p$-th feature $x_p$ to the $k$-th Gaussian. An FV is obtained by stacking the differences: $\Phi = \big[\Phi^{(1)}_1, \Phi^{(2)}_1, \ldots, \Phi^{(1)}_K, \Phi^{(2)}_K\big]$. The encoding describes how the distribution of features of a
particular image differs from the distribution fitted to the features of all training images. To make
the features amenable to the FV description based on the diagonal-covariance GMM, they are first
decorrelated by PCA.
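For concreteness, a minimal NumPy sketch of the encoding (1) for a given diagonal-covariance GMM (this is our illustration, not the authors' implementation):

```python
import numpy as np

def fisher_vector(X, w, mu, sigma2):
    # X: (N, d) local features; w: (K,) GMM weights; mu, sigma2: (K, d)
    N = X.shape[0]
    diff = X[:, None, :] - mu[None, :, :]                        # (N, K, d)
    log_p = -0.5 * (np.log(2 * np.pi * sigma2)[None]
                    + diff ** 2 / sigma2[None]).sum(-1)          # (N, K)
    log_q = np.log(w)[None] + log_p
    gamma = np.exp(log_q - log_q.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)                    # soft assignments
    phi1 = (gamma[..., None] * diff / np.sqrt(sigma2)[None]).sum(0) \
           / (N * np.sqrt(w))[:, None]
    phi2 = (gamma[..., None] * (diff ** 2 / sigma2[None] - 1)).sum(0) \
           / (N * np.sqrt(2 * w))[:, None]
    return np.concatenate([phi1.ravel(), phi2.ravel()])          # 2Kd-dimensional
```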
The FV dimensionality is 2Kd, where K is the codebook size (the number of Gaussians in the
GMM), and d is the dimensionality of the encoded feature vector. For instance, FV encoding of
a SIFT feature (d = 128) using a small GMM codebook (K = 256) is 65.5K-dimensional. This
means that high-dimensional feature encodings can be quickly computed using small codebooks.
Using the same codebook size, BoW and sparse coding are only K-dimensional and less discriminative, as demonstrated in [6]. From another point of view, given the desired encoding dimensionality,
these methods would require 2d-times larger codebooks than needed for FV, which would lead to
impractical computation times.
[Figure 1 diagram omitted. Left (Fisher network), bottom to top: input image → dense feature extraction (SIFT, raw patches, ...; the 0-th layer) → 1-st Fisher layer (FV encoder; spatial stacking; L2 norm. & PCA; with an optional global pooling branched out) → 2-nd Fisher layer with global pooling (FV encoder; SSR & L2 norm.) → classifier layer (one-vs-rest linear SVMs). Right (conventional pipeline): input image → dense feature extraction → FV encoder → SSR & L2 norm. → one-vs-rest linear SVMs.]
Figure 1: Left: Fisher network (Sect. 4) with two Fisher layers. Right: conventional pipeline
using a shallow Fisher vector encoding. As shown in Sect. 6, making the conventional pipeline
slightly deeper by injecting a single Fisher layer substantially improves the classification accuracy.
As can be seen from (1), the (unnormalised) FV encoding is additive with respect to image features,
i.e. the encoding of an image is an average of the individual encodings of its features. Following [20],
FV performance is further improved by passing it through Signed Square-Rooting (SSR) and L2
normalisation. Finally, the high-dimensional FV is usually coupled with a one-vs-rest linear SVM
classifier, and together they form a conventional image classification pipeline [21] (see Fig. 1), which
serves as a baseline for our classification framework.
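Both post-processing steps are one-liners; a minimal sketch:

```python
import numpy as np

def ssr_l2(phi, eps=1e-12):
    phi = np.sign(phi) * np.sqrt(np.abs(phi))       # signed square-rooting (SSR)
    return phi / max(np.linalg.norm(phi), eps)      # L2 normalisation
```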
3
Fisher layer
The conventional FV representation of an image (Sect. 2), effectively encodes each local feature
(e.g. SIFT) into a high-dimensional representation, and then aggregates these encodings into a single
vector by global sum-pooling over the whole image (followed by normalisation). This means that
the representation describes the image in terms of the local patch features, and can not capture more
complex image structures. Deep neural networks are able to model the feature hierarchies by passing
an output of one feature computation layer as the input to the next one. We adopt a similar approach
here, and devise a feed-forward feature encoding layer (which we term a Fisher layer), which is
based on off-the-shelf Fisher vector encoding. The layers can then be stacked into a deep network,
which we call a Fisher network.
The architecture of the $l$-th Fisher layer is depicted in Fig. 2. On the input, it receives $d_l$-dimensional features ($d_l \sim 10^2$), densely computed over multiple scales on a regular image grid. The features are
assumed to be decorrelated using PCA. The layer then performs feed-forward feature transformation
in three sub-layers.
The first one computes semi-local FV encodings by pooling the input features not from the whole
image, but from a dense set of semi-local regions. The resulting FVs form a new set of densely
sampled features that are more discriminative than the input ones and less local, as they integrate
information from larger image areas. The FV encoder (Sect. 2) uses a layer-specific GMM with $K_l$ components, so the dimensionality of each FV is $2K_l d_l$, which, considering that FVs are computed densely, might be too large for practical applications. Therefore, we decrease the FV dimensionality by projection onto an $h_l$-dimensional subspace using a discriminatively trained linear projection $W_l \in \mathbb{R}^{h_l \times 2K_l d_l}$. In practice, this is carried out using an efficient variant of the FV encoder (Sect. 5). In the second sub-layer, the spatially adjacent features are stacked in a $2\times 2$ window, which produces a $4h_l$-dimensional dense feature representation. Finally, the features are $L_2$-normalised and PCA-projected to a $d_{l+1}$-dimensional subspace using the linear projection $U_l \in \mathbb{R}^{d_{l+1}\times 4h_l}$, and passed as the input to the $(l+1)$-th layer. Each sub-layer is explained in more detail below.
[Figure 2 diagram omitted: d_l-dimensional input features → compressed semi-local Fisher vector encoding (mixture of K_l Gaussians; projection W_l from 2K_l d_l to h_l) → spatial stacking (2×2; output 4h_l) → L2 norm. & PCA (projection U_l from 4h_l to d_{l+1}); pooling windows of sizes q_l and 2q_l shown on the right.]
Figure 2: The architecture of a single Fisher layer. Left: the arrows illustrate the data flow through
the layer; the dimensionality of densely computed features is shown next to the arrows. Right: spatial
pooling (the blue squares) and stacking (the red square) in sub-layers 1 and 2 respectively.
Fisher vector pooling (sub-layer 1). The key idea behind the first sub-layer is to aggregate the FVs
of individual features over a family of semi-local spatial neighbourhoods. These neighbourhoods
are overlapping square regions of size $q_l \times q_l$, sampled every $\delta_l$ pixels (see Fig. 2); compared to the
regions used in global or spatial pyramid pooling [20], these are smaller and sampled much more
densely. As a result, instead of a single FV, describing the whole image, the image is represented by
a large number of densely computed semi-local FVs, each of which describes a spatially adjacent set
of local features, computed by the previous layer. Thus, the new feature representation can capture
more complex image statistics with larger spatial support. We note that due to additivity, computing the FV of a spatial neighbourhood corresponds to the sum-pooling over the neighbourhood, a
stage widely used in DNNs. The high dimensionality of Fisher vectors, however, brings up the
computational complexity issue, as storing and processing thousands of dense FVs per image (each
of which is $2K_l d_l$-dimensional) is prohibitive at large scale. We tackle this problem by employing
discriminative dimensionality reduction for high-dimensional FVs, which makes the layer learning
procedure supervised. The dimensionality reduction is carried out using a linear projection Wl onto
an hl -dimensional subspace. As will be shown in Sect. 5, compressed FVs can be computed very
efficiently without the need to compute the full-dimensional FVs first, and then project them down.
A similar approach (passing the output of a feature encoder to another encoder) has been previously
employed by [1, 8, 32], but in their case they used bag-of-words or sparse coding representations. As
noted in [8], such encodings require large codebooks to produce discriminative feature representations. This, in turn, makes these approaches hardly applicable to the datasets of ImageNet scale [4].
As explained in Sect. 2, FV encoders do not require large codebooks, and by employing supervised
dimensionality reduction, we can preserve the discriminative ability of FV even after the projection
onto a low-dimensional space, similarly to [10].
Spatial stacking (sub-layer 2). After the dimensionality-reduced FV pooling (Sect. 3), an image is
represented as a spatially dense set of low-dimensional multi-scale discriminative features. It should
be noted that local sum-pooling, while making the representation invariant to small translations, is
agnostic to the relative location of aggregated features. To capture the spatial structure within each
feature's neighbourhood, we incorporate the stacking sub-layer, which concatenates the spatially
adjacent features in a 2×2 window (Fig. 2). This step is similar to the 4×4 stacking employed in SIFT.
Normalisation and PCA projection (sub-layer 3). After stacking, the features are L2 -normalised,
which improves their invariance properties. This procedure is closely related to Local Contrast
Normalisation, widely used in DNNs. Finally, before passing the features to the FV encoder of the
next layer, PCA dimensionality reduction is carried out, which serves two purposes: (i) features
are decorrelated so that they can be modelled using diagonal-covariance GMMs of the next layer;
(ii) dimensionality is reduced from 4hl to dl+1 to keep the image representation compact and the
computational complexity limited.
Multi-scale computation. In practice, the Fisher layer computation is repeated at multiple scales by
changing the pooling window size ql (the PCA projection in sub-layer 3 is the same for all scales).
This allows a single layer to capture multi-scale statistics, which is different from typical DNN
architectures, which use a single pooling window size per layer. The resulting dense multi-scale
features, computed by the layer, form the input of the next layer (similarly to the dense multi-scale
SIFT features). In Sect. 6 we show that a multi-scale Fisher layer indeed brings an improvement,
compared to a fixed pooling window size.
4 Fisher network
Our image classification pipeline, which we coin Fisher network (shown in Fig. 1), is constructed by
stacking several (at least one) Fisher layers (Sect. 3) on top of dense features, such as SIFT or raw
image patches. The penultimate layer, which computes a single-vector image representation, is the
special case of the Fisher layer, where sum-pooling is only performed globally over the whole image.
We call this layer the global Fisher layer, and it effectively computes a full-dimensional normalised
Fisher vector encoding (the dimensionality reduction stage is omitted since the computed FV is
directly used for classification). The final layer is an off-the-shelf ensemble of one-vs-rest binary
linear SVMs. As can be seen, a Fisher network generalises the standard FV pipeline of [20], as the
latter corresponds to the network with a single global Fisher layer.
Multi-layer image descriptor. Each subsequent Fisher layer is designed to capture more complex,
higher-level image statistics, but the competitive performance of shallow FV-based frameworks [21]
suggests that low-level SIFT features are already discriminative enough to distinguish between a
number of image classes. To fully exploit the hierarchy of Fisher layers, we branch out a globally
pooled, normalised FV from each of the Fisher layers, not just the last one. These image representations are then concatenated to produce a rich, multi-layer image descriptor. A similar approach has
previously been applied to convolutional networks in [25].
4.1 Learning
The Fisher network is trained in a supervised manner, since each Fisher layer (apart from the global
layer) depends on discriminative dimensionality reduction. The network is trained greedily, layer by
layer. Here we discuss how the (non-global) Fisher layer can be efficiently trained in the large-scale
scenario, and introduce two options for the projection learning objective.
Projection learning proxy. As explained in Sect. 3, we need to learn a discriminative projection
W to significantly reduce the dimensionality of the densely-computed semi-local FVs. At the same
time, the only annotation available for discriminative learning in our case is the class label of the
whole image. We exploit this information by requiring that projected semi-local FVs are good
predictors of the image class. Taking into account that (i) it may be unreasonable to require all local
feature occurrences to predict the object class (the support of some features may not even cover the
object), and (ii) there are too many features to use all of them in learning (∼10⁴ semi-local FVs for
each of the ∼10⁶ training images), we optimize the average class prediction of all the features in a
layer, rather than the prediction of individual feature occurrences.
In particular, we construct a learning proxy by computing the average φ̄ of all unnormalised, unprojected semi-local FVs φs of an image, φ̄ = (1/S) Σ_{s=1}^S φs, and defining the learning constraints on φ̄
using the image label. Considering that W φ̄ = (1/S) Σ_{s=1}^S W φs, the projection W, learnt for φ̄, is also
applicable to individual semi-local FVs φs. The advantages of the proxy are that the image-level
class annotation can now be utilised, and during projection learning we only need to store a single
vector φ̄ per image. In the sequel, we define two options for the projection learning objective, which
are then compared in Sect. 6.
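A small sketch of the proxy, under our naming: since W applied to the average equals the average of the projected FVs, one stored vector per image suffices for projection learning.

import numpy as np

def image_proxy(phis):
    # phis: (S, D) unnormalised, unprojected semi-local FVs of one image
    # returns phi_bar = (1/S) * sum_s phi_s, the single vector stored per image
    return phis.mean(axis=0)

# linearity check: projecting the proxy equals averaging the projected FVs
phis = np.random.randn(100, 64); W = np.random.randn(16, 64)
assert np.allclose(W @ image_proxy(phis), (phis @ W.T).mean(axis=0))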
Bi-convex max-margin projection learning. One approach to discriminative dimensionality reduction learning consists in finding the projection onto a subspace where the image classes are
as linearly separable as possible [10, 31]. This corresponds to the bilinear class scoring function:
vc^T W φ, where W is the linear projection which we seek to optimise and vc is the linear model (e.g.
an SVM) of the class c in the projected space. The max-margin optimisation problem for W and the
ensemble {vc} takes the following form:

Σ_i Σ_{c′≠c(i)} max[ (v_{c′} − v_{c(i)})^T W φ_i + 1, 0 ] + (λ/2) Σ_c ‖vc‖²_2 + (μ/2) ‖W‖²_F,   (2)

where c(i) is the ground-truth class of an image i, and λ and μ are the regularisation constants. The
learning objective is bi-convex in W and {vc}, and a local optimum can be found by alternation
between the convex problems for W and {vc}, both of which can be solved in primal using a
stochastic sub-gradient method [27]. We initialise the alternation by setting W to the PCA-whitening
matrix W0. Once the optimisation has converged, the classifiers vc are discarded, and we keep the
projection W.
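The following sketch shows one way the alternation could be implemented; it is ours, not the authors' code. The step size, epoch counts, and taking a per-sample sub-gradient step only on the most violated class (rather than summing the hinge over all wrong classes, as in (2)) are simplifying assumptions.

import numpy as np

def learn_projection(phis, labels, W0, n_classes, lr=1e-3, lam=1e-4, mu=1e-4,
                     epochs=5, rounds=3):
    # phis: (n, D) proxy FVs; labels: (n,) integer class indices; W0: whitening init
    W = W0.copy()
    V = np.zeros((n_classes, W.shape[0]))          # one linear model v_c per class
    for _ in range(rounds):                        # alternate the two convex problems
        for step_on_W in (False, True):
            for _ in range(epochs):
                for i in np.random.permutation(len(phis)):
                    z = W @ phis[i]                # projected FV
                    margins = V @ z - V[labels[i]] @ z + 1.0
                    margins[labels[i]] = 0.0
                    c = int(np.argmax(margins))    # most violated class only
                    if margins[c] <= 0:
                        continue
                    if step_on_W:                  # sub-gradient step on W, V fixed
                        W -= lr * (np.outer(V[c] - V[labels[i]], phis[i]) + mu * W)
                    else:                          # sub-gradient step on V, W fixed
                        V[c] -= lr * (z + lam * V[c])
                        V[labels[i]] -= lr * (lam * V[labels[i]] - z)
    return W                                       # the classifiers V are discarded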
Projection onto the space of classifier scores. Another dimensionality reduction technique, which
we consider in this work, is to train one-vs-rest SVM classifiers {uc}_{c=1}^C on the full-dimensional
FVs φ, and then use the C-dimensional vector of SVM outputs as the compressed representation
of φ. This corresponds to setting the c-th row of the projection matrix W to the SVM model uc.
This approach is closely related to attribute-based representations and classemes [15, 30], but in our
case we do not use any additional data annotated with a different set of (attribute) classes to train
the models; instead, the C = 1000 classifiers trained directly on the ILSVRC dataset are used. If
a specific target dimensionality is required, PCA dimensionality reduction can be further applied to
the classifier scores [10], but in our case we applied PCA after spatial stacking (Sect. 3).
The advantage of using SVM models for dimensionality reduction is, mostly, computational. As we
will show in Sect. 6, both formulations exhibit a similar level of performance, but training C one-vs-rest classifiers is much faster than performing alternation between SVM learning and projection
learning in (2). The reason is that one-vs-rest SVM training can be easily parallelised, while projection learning is significantly slower even when using a parallel gradient descent implementation.
5 Implementation details
Efficient computation of hard-assignment Fisher vectors. In the original FV encoding formulation (1), each feature is soft-assigned to all K Gaussians of the GMM by computing the assignment
weights γk(xp) as the responsibilities of GMM component k for feature p: γk(xp) = πk Nk(xp) / Σj πj Nj(xp),
where Nk(xp) is the likelihood of the k-th Gaussian. To facilitate an efficient computation of a large
number of dense FVs per image, we introduce and utilise a fast variant of FV (which we term
hard-FV), which uses hard assignments of features to Gaussians, computed as
γk(xp) = 1 if k = arg max_j πj Nj(xp), and 0 otherwise.   (3)
Hard-FVs are inherently sparse; this allows for the fast computation of projected FVs Wl φ. Indeed, it
is easy to show that

Wl φ = Σ_{k=1}^{K} Σ_{p∈Ωk} [ Wl^{(k,1)} Δk^{(1)}(p) + Wl^{(k,2)} Δk^{(2)}(p) ],

where Ωk is the set of input vectors hard-assigned to the GMM component k, and Wl^{(k,1)}, Wl^{(k,2)} are the sub-matrices
of Wl which correspond to the 1st and 2nd order differences Δk^{(1),(2)}(p) between the feature xp
and the k-th GMM mean (1). This suggests the fast computation procedure: each dl-dimensional
input feature xp is first hard-assigned to a Gaussian k based on (3). Then, the corresponding dl-dimensional
differences Δk^{(1),(2)}(p) are computed and projected using small hl × dl sub-matrices Wl^{(k,1)}, Wl^{(k,2)},
which is fast. The algorithm avoids computing high-dimensional FVs, followed by the projection
using a large matrix Wl ∈ R^{hl×2Kl dl}, which is prohibitive since the number of dense FVs is high.
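A hedged NumPy sketch of this procedure (our names and shapes; the usual FV normalisation constants are omitted): hard-assign each feature, then project its first and second order differences with the small per-component sub-matrices.

import numpy as np

def projected_hard_fv(X, pi, mu, var, W1, W2):
    # X: (P, d) local features; pi: (K,) mixing weights; mu, var: (K, d) GMM params
    # W1, W2: (K, h, d) sub-matrices of W for the 1st/2nd order terms
    # returns the h-dimensional projected FV without forming the 2*K*d FV
    log_lik = (np.log(pi)[None, :]
               - 0.5 * np.sum(np.log(2 * np.pi * var), axis=1)[None, :]
               - 0.5 * (((X[:, None, :] - mu[None]) ** 2) / var[None]).sum(-1))
    k = log_lik.argmax(axis=1)                     # hard assignment (3), in log domain
    out = np.zeros(W1.shape[1])
    for comp in range(len(pi)):                    # loop over GMM components
        Xk = X[k == comp]
        if len(Xk) == 0:
            continue
        d1 = (Xk - mu[comp]) / np.sqrt(var[comp])  # 1st order differences
        d2 = d1 ** 2 - 1.0                         # 2nd order differences
        out += W1[comp] @ d1.sum(0) + W2[comp] @ d2.sum(0)
    return out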
Implementation. Our SIFT feature extraction follows that of [21]. Images are rescaled so that
the number of pixels is 100K. Dense RootSIFT [2] is computed on 24×24 patches over 5 scales
(scale factor ∛2) with a 3 pixel step. We also employ SIFT augmentation with the patch spatial
coordinates [24]. During training, high-dimensional FVs, computed by the 2nd Fisher layer, are
compressed using product quantisation [23]. The learning framework is implemented in Matlab,
speeded up with C++ MEX. The computation is carried out on CPU without the use of GPU. Training the Fisher network on top of SIFT descriptors on 1.2M images of ILSVRC-2010 [4] dataset
takes about one day on a 200-core cluster. Image classification time is ∼2 s on a single core.
6 Evaluation
In this section, we evaluate the proposed Fisher network on the dataset, introduced for the ImageNet
Large Scale Visual Recognition Challenge (ILSVRC) 2010 [4]. It contains images of 1000 categories, with 1.2M images available for training, 50K for validation, and 150K for testing. Following
the standard evaluation protocol for the dataset, we report both top-1 and top-5 accuracy (%) computed on the test set. Sect. 6.1 evaluates variants of the Fisher network on a subset of ILSVRC to
identify the best one. Then, Sect. 6.2 evaluates the complete framework.
6.1 Fisher network variants
We begin with comparing the performance of the Fisher network under different settings. The
comparison is carried out on a subset of ILSVRC, which was obtained by random sampling of 200
classes out of 1000. To avoid over-fitting indirectly on the test set, comparisons in this section are
carried out on the validation set. In our experiments, we used SIFT as the first layer of the network,
followed by two Fisher layers (the second one is global, as explained in Sect. 4).

Table 1: Evaluation of dimensionality reduction, stacking, and normalisation sub-layers on a
200 class subset of ILSVRC-2010. The following configuration of Fisher layers was used: d1 =
128, K1 = 256, q1 = 5, δ1 = 1, h1 = 200 (number of classes), d2 = 200, K2 = 256. The baseline
performance of a shallow FV encoding is 57.03% and 78.9% (top-1 and top-5 accuracy).

dim-ty reduction  | stacking | L2 norm-n | top-1 | top-5
classifier scores |          | X         | 59.69 | 80.29
classifier scores | X        |           | 59.42 | 80.44
classifier scores | X        | X         | 60.22 | 80.93
bi-convex         | X        | X         | 59.49 | 81.11

Table 2: Evaluation of multi-scale pooling and multi-layer image description on the subset of
ILSVRC-2010. The following configuration of Fisher layers was used: d1 = 128, K1 = 256,
h1 = 200, d2 = 200, K2 = 256. Both Fisher layers used spatial coordinate augmentation. The
baseline performance of a shallow FV encoding is 59.51% and 80.50% (top-1 and top-5 accuracy).

pooling window size q1 | pooling stride δ1 | multi-layer | top-1 | top-5
5                      | 1                 |             | 61.56 | 82.21
{5, 7, 9, 11}          | 2                 |             | 62.16 | 82.43
{5, 7, 9, 11}          | 2                 | X           | 63.79 | 83.73
Dimensionality reduction, stacking, and normalisation. Here we quantitatively assess the three
sub-layers of a Fisher layer (Sect. 3). We compare the two proposed dimensionality reduction learning schemes (bi-convex learning and classifier scores), and also demonstrate the importance of spatial stacking and L2 normalisation. The results are shown in Table 1. As can be seen, both spatial
stacking and L2 normalisation improve the performance, and dimensionality reduction via projection onto the space of SVM classifier scores performs on par with the projection learnt using the
bi-convex formulation (2). In the following experiments we used the classifier scores for dimensionality reduction, since their training can be parallelised and is significantly faster.
Multi-scale pooling and multi-layer image representation. In this experiment, we compare the
performance of semi-local FV pooling using single and multiple window sizes (Sect. 3), as well as
single- and multi-layer image representations (Sect. 4). From Table 2 it is clear that using multiple pooling window sizes is beneficial compared to a single window size. When using multi-scale
pooling, the pooling stride was increased to keep the number of pooled semi-local FVs roughly the
same. Also, the multi-layer image descriptor obtained by stacking globally pooled and normalised
FVs, computed by the two Fisher layers, outperforms each of these FVs taken separately. We also
note that in this experiment, unlike the previous one, both Fisher layers utilized spatial coordinate
augmentation of the input features, which leads to a noticeable boost in the shallow baseline performance (from 78.9% to 80.50% top-5 accuracy). Apart from our Fisher network, multi-scale pooling
can be readily employed in convolutional networks.
6.2 Evaluation on ILSVRC-2010
Now that we have evaluated various Fisher layer configurations on a subset of ILSVRC, we assess
the performance of our framework on the full ILSVRC-2010 dataset. We use off-the-shelf SIFT and
colour features [20] in the feature extraction layer, and demonstrate that significant improvements
can be achieved by injecting a single Fisher layer into the conventional FV-based pipeline [23].
The following configuration of Fisher layers was used: d1 = 80, K1 = 512, q1 = {5, 7, 9, 11},
δ1 = 2, h1 = 1000, d2 = 256, K2 = 256. On both Fisher layers, we used spatial coordinate
augmentation of the input features. The first Fisher layer uses a large number of GMM components
Kl , since it was found to be beneficial for shallow FV encodings [23], used here as a baseline. The
one-vs-rest SVM scores were Platt-calibrated on the validation set (we did not use calibration for
semi-local FV dimensionality reduction).
The results are shown in Table 3. First, we note that the globally pooled Fisher vector, branched
out of the first Fisher layer (which effectively corresponds to the conventional FV encoding [23]),
results in better accuracy than reported in [23], which validates our implementation. Using the 2nd
Fisher layer on top of the 1st one leads to a significant performance improvement. Finally, stacking
the FVs, produced by the 1st and 2nd Fisher layers, pushes the accuracy even further.
Table 3: Performance on ILSVRC-2010 using dense SIFT and colour features. We also specify
the dimensionality of SIFT-based image representations.

                                        |        SIFT only           | SIFT & colour
pipeline setting                        | dimension | top-1 | top-5 | top-1 | top-5
1st Fisher layer                        | 82K       | 46.52 | 68.45 | 55.35 | 76.35
2nd Fisher layer                        | 131K      | 48.54 | 71.35 | 56.20 | 77.68
multi-layer (1st and 2nd Fisher layers) | 213K      | 52.57 | 73.68 | 59.47 | 79.20
Sánchez et al. [23]                     | 524K      | N/A   | 67.9  | 54.3  | 74.3
The state of the art on the ILSVRC-2010 dataset was obtained using an 8-layer convolutional network [14], i.e. twice as deep as the Fisher network considered here. Using training and test set
augmentation based on jittering (not employed here), they achieved the top-1 / top-5 accuracy of
62.5% / 83.0%. Without test set augmentation (i.e. using only the original images for class scoring),
their result is 61% / 81.7%. In our case, we did not augment neither the training, nor the test set, and
achieved 59.5% / 79.2%. For reference, our baseline shallow FV accuracy is 55.4% / 76.4%. We
conclude that injecting a single intermediate layer leads to a significant performance boost (+4.1%
top-1 accuracy), but deep CNNs are still somewhat better (+1.5% top-1 accuracy). These results are
however quite encouraging since they were obtained by using off-the-shelf features and encodings,
reconfigured to add a single intermediate layer. Notably, our model did not require an optimised
GPU implementation, nor it was necessary to control over-fitting by techniques such as dropout [14]
and training set augmentation.
Finally, we demonstrate that the Fisher network and deep CNN representations are complementary
by combining the class posteriors obtained from CNN with those of a Fisher network. To this end,
we re-implemented the deep CNN of [14] using their publicly available cuda-convnet toolbox. Our
implementation performs slightly better, giving 62.91% / 83.19% (with test set augmentation). The
multiplication of CNN and Fisher network posteriors leads to a significantly improved accuracy:
66.75% / 85.64%. It should be noted that another way of improving the CNN accuracy, used in [14]
on ImageNet-2012 dataset, consists in training several CNNs and averaging their posteriors. Further
study of the complementarity of various deep and shallow representations is beyond the scope of
this paper, and will be addressed in future research.
7 Conclusion
We have shown that Fisher vectors, a standard image encoding method, are amenable to be stacked in
multiple layers, in analogy to the state-of-the-art deep neural network architectures. Adding a single
layer is in fact sufficient to significantly boost the performance of these shallow image encodings,
bringing their performance closer to the state of the art in the large-scale classification scenario [14].
The fact that off-the-shelf image representations can be simply and successfully stacked indicates
that deep schemes may extend well beyond neural networks.
Acknowledgements
This work was supported by ERC grant VisRec no. 228180.
References
[1] A. Agarwal and B. Triggs. Hyperfeatures - multilevel local coding for visual recognition. In Proc. ECCV, pages 30–43, 2006.
[2] R. Arandjelović and A. Zisserman. Three things everyone should know to improve object retrieval. In Proc. CVPR, 2012.
[3] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, pages 153–160, 2006.
[4] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge (ILSVRC), 2010. URL http://www.image-net.org/challenges/LSVRC/2010/.
[5] J. Carreira, R. Caseiro, J. Batista, and C. Sminchisescu. Semantic segmentation with second-order pooling. In Proc. ECCV, pages 430–443, 2012.
[6] K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In Proc. BMVC., 2011.
[7] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Proc. CVPR, pages 3642–3649, 2012.
[8] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011.
[9] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, pages 1–22, 2004.
[10] A. Gordo, J. A. Rodríguez-Serrano, F. Perronnin, and E. Valveny. Leveraging category-level labels for instance-level image retrieval. In Proc. CVPR, pages 3045–3052, 2012.
[11] B. Hariharan, J. Malik, and D. Ramanan. Discriminative decorrelation for clustering and classification. In Proc. ECCV, 2012.
[12] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[13] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In NIPS, pages 487–493, 1998.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[15] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In Proc. CVPR, pages 951–958, 2009.
[16] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. In Proc. CVPR, 2006.
[17] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proc. ICML, 2012.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[19] D. Lowe. Object recognition from local scale-invariant features. In Proc. ICCV, pages 1150–1157, Sep 1999.
[20] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
[21] F. Perronnin, Z. Akata, Z. Harchaoui, and C. Schmid. Towards good practice in large-scale learning for image classification. In Proc. CVPR, pages 3482–3489, 2012.
[22] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[23] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Proc. CVPR, 2011.
[24] J. Sánchez, F. Perronnin, and T. Emídio de Campos. Modeling the spatial layout of images beyond spatial pyramids. Pattern Recognition Letters, 33(16):2216–2223, 2012.
[25] P. Sermanet and Y. LeCun. Traffic sign recognition with multi-scale convolutional networks. In International Joint Conference on Neural Networks, pages 2809–2813, 2011.
[26] T. Serre, L. Wolf, and T. Poggio. A new biologically motivated framework for robust object recognition. Proc. CVPR, 2005.
[27] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proc. ICML, volume 227, 2007.
[28] K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman. Fisher Vector Faces in the Wild. In Proc. BMVC., 2013.
[29] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In Proc. ICCV, volume 2, pages 1470–1477, 2003.
[30] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient object category recognition using classemes. In Proc. ECCV, pages 776–789, Sep 2010.
[31] J. Weston, S. Bengio, and N. Usunier. WSABIE: Scaling up to large vocabulary image annotation. In Proc. IJCAI, pages 2764–2770, 2011.
[32] S. Yan, X. Xu, D. Xu, S. Lin, and X. Li. Beyond spatial pyramids: A new feature extraction framework with dense spatial sampling for image classification. In Proc. ECCV, pages 473–487, 2012.
[33] J. Yang, K. Yu, Y. Gong, and T. S. Huang. Linear spatial pyramid matching using sparse coding for image classification. In Proc. CVPR, pages 1794–1801, 2009.
Lightspeed Computation of Optimal Transport
Marco Cuturi
Graduate School of Informatics, Kyoto University
[email protected]
Abstract
Optimal transport distances are a fundamental family of distances for probability
measures and histograms of features. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cost can quickly become prohibitive whenever the size of the support of these measures or the histograms? dimension exceeds a few hundred. We propose in this work a new family
of optimal transport distances that look at transport problems from a maximum-entropy perspective. We smooth the classic optimal transport problem with an
entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn's matrix scaling algorithm at a
speed that is several orders of magnitude faster than that of transport solvers. We
also show that this regularized distance improves upon classic optimal transport
distances on the MNIST classification problem.
1 Introduction
Choosing a suitable distance to compare probabilities is a key problem in statistical machine learning. When little is known on the probability space on which these probabilities are supported, various
information divergences with minimalistic assumptions have been proposed to play that part, among
which the Hellinger, χ², total variation or Kullback-Leibler divergences. When the probability
space is a metric space, optimal transport distances (Villani, 2009, §6), a.k.a. earth mover's (EMD)
in computer vision (Rubner et al., 1997), define a more powerful geometry to compare probabilities.
This power comes, however, with a heavy computational price tag. No matter the algorithm
employed, network simplex or interior point methods, the cost of computing optimal transport
distances scales at least in O(d³ log(d)) when comparing two histograms of dimension d or two
point clouds each of size d in a general metric space (Pele and Werman, 2009, §2.1).
In the particular case that the metric probability space of interest can be embedded in R^n and n is
small, computing or approximating optimal transport distances can become reasonably cheap. Indeed, when n = 1, their computation only requires O(d log d) operations. When n ≥ 2, embeddings
of measures can be used to approximate them in linear time (Indyk and Thaper, 2003; Grauman and
Darrell, 2004; Shirdhonkar and Jacobs, 2008) and network simplex solvers can be modified to run
in quadratic time (Gudmundsson et al., 2007; Ling and Okada, 2007). However, the distortions of
such embeddings (Naor and Schechtman, 2007) as well as the exponential increase of costs incurred
by such modifications as n grows make these approaches inapplicable when n exceeds 4. Outside of
the perimeter of these cases, computing a single distance between a pair of measures supported by
a few hundred points/bins in an arbitrary metric space can take more than a few seconds on a single
CPU. This issue severely hinders the applicability of optimal transport distances in large-scale data
analysis and goes as far as putting into question their relevance within the field of machine learning,
where high-dimensional histograms and measures in high-dimensional spaces are now prevalent.
We show in this paper that another strategy can be employed to speed-up optimal transport, and even
potentially define a better distance in inference tasks. Our strategy is valid regardless of the metric
characteristics of the original probability space. Rather than exploit properties of the metric probability space of interest (such as embeddability in a low-dimensional Euclidean space) we choose to
focus directly on the original transport problem, and regularize it with an entropic term. We argue
that this regularization is intuitive given the geometry of the optimal transport problem and has,
in fact, been long known and favored in transport theory to predict traffic patterns (Wilson, 1969).
From an optimization point of view, this regularization has multiple virtues, among which that of
turning the transport problem into a strictly convex problem that can be solved with matrix scaling
algorithms. Such algorithms include Sinkhorn's celebrated fixed point iteration (1967), which is
known to have a linear convergence (Franklin and Lorenz, 1989; Knight, 2008). Unlike other iterative simplex-like methods that need to cycle through complex conditional statements, the execution
of Sinkhorn's algorithm only relies on matrix-vector products. We propose a novel implementation
of this algorithm that can compute simultaneously the distance of a single point to a family of points
using matrix-matrix products, and which can therefore be implemented on GPGPU architectures.
We show that, on the benchmark task of classifying MNIST digits, regularized distances perform
better than standard optimal transport distances, and can be computed several orders of magnitude
faster.
This paper is organized as follows: we provide reminders on optimal transport theory in Section 2,
introduce Sinkhorn distances in Section 3 and provide algorithmic details in Section 4. We follow
with an empirical study in Section 5 before concluding.
2 Reminders on Optimal Transport
Transport Polytope and Interpretation as a Set of Joint Probabilities. In what follows, ⟨·, ·⟩
stands for the Frobenius dot-product. For two probability vectors r and c in the simplex Σ_d := {x ∈ R^d_+ : x^T 1_d = 1}, where 1_d is the d-dimensional vector of ones, we write U(r, c) for the transport
polytope of r and c, namely the polyhedral set of d × d matrices,

U(r, c) := {P ∈ R^{d×d}_+ | P 1_d = r, P^T 1_d = c}.

U(r, c) contains all nonnegative d × d matrices with row and column sums r and c respectively.
U(r, c) has a probabilistic interpretation: for X and Y two multinomial random variables taking
values in {1, ..., d}, each with distribution r and c respectively, the set U(r, c) contains all possible
joint probabilities of (X, Y). Indeed, any matrix P ∈ U(r, c) can be identified with a joint probability for (X, Y) such that p(X = i, Y = j) = p_ij. We define the entropy h and the Kullback-Leibler
divergence of P, Q ∈ U(r, c) and of a marginal r ∈ Σ_d as

h(r) = −Σ_{i=1}^d r_i log r_i,   h(P) = −Σ_{i,j=1}^d p_ij log p_ij,   KL(P‖Q) = Σ_{ij} p_ij log(p_ij / q_ij).
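These definitions translate directly into code. A quick numeric check (ours, not the paper's): for the product table P with p_ij = r_i c_j, one has h(P) = h(r) + h(c) and KL of P to itself is zero.

import numpy as np

def h(x):
    x = x[x > 0]                        # convention: 0 log 0 = 0
    return -np.sum(x * np.log(x))

def kl(P, Q):
    m = P > 0
    return np.sum(P[m] * np.log(P[m] / Q[m]))

r = np.array([0.5, 0.3, 0.2]); c = np.array([0.2, 0.2, 0.6])
P = np.outer(r, c)                      # product of the marginals, lies in U(r, c)
assert np.isclose(h(P), h(r) + h(c))    # entropy is additive for the product table
assert np.isclose(kl(P, np.outer(r, c)), 0.0)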
Optimal Transport Distance Between r and c. Given a d × d cost matrix M, the cost of mapping
r to c using a transport matrix (or joint probability) P can be quantified as ⟨P, M⟩. The problem
defined in Equation (1),

d_M(r, c) := min_{P ∈ U(r,c)} ⟨P, M⟩,   (1)

is called an optimal transport (OT) problem between r and c given cost M. An optimal table P*
for this problem can be obtained, among other approaches, with the network simplex (Ahuja et al.,
1993, §9). The optimum of this problem, d_M(r, c), is a distance between r and c (Villani, 2009,
§6.1) whenever the matrix M is itself a metric matrix, namely whenever M belongs to the cone of
distance matrices (Avis, 1980; Brickell et al., 2008):

M = {M ∈ R^{d×d}_+ : ∀i, j ≤ d, m_ij = 0 ⇔ i = j; ∀i, j, k ≤ d, m_ij ≤ m_ik + m_kj}.

For a general matrix M, the worst case complexity of computing that optimum scales in O(d³ log d)
for the best algorithms currently proposed, and turns out to be super-cubic in practice as well (Pele
and Werman, 2009, §2.1).
3 Sinkhorn Distances: Optimal Transport with Entropic Constraints
Entropic Constraints on Joint Probabilities. The following information theoretic inequality
(Cover and Thomas, 1991, §2) for joint probabilities,

∀r, c ∈ Σ_d, ∀P ∈ U(r, c), h(P) ≤ h(r) + h(c),

is tight, since the independence table rc^T (Good, 1963) has entropy h(rc^T) = h(r) + h(c). By the
concavity of entropy, we can introduce the convex set

U_α(r, c) := {P ∈ U(r, c) | KL(P‖rc^T) ≤ α} = {P ∈ U(r, c) | h(P) ≥ h(r) + h(c) − α} ⊂ U(r, c).

These two definitions are indeed equivalent, since one can easily check that KL(P‖rc^T) = h(r) +
h(c) − h(P), a quantity which is also the mutual information I(X‖Y) of two random variables
(X, Y) should they follow the joint probability P (Cover and Thomas, 1991, §2). Hence, the set of
tables P whose Kullback-Leibler divergence to rc^T is constrained to lie below a certain threshold
can be interpreted as the set of joint probabilities P in U(r, c) which have sufficient entropy with
respect to h(r) and h(c), or small enough mutual information. For reasons that will become clear in
Section 4, we call the quantity below the Sinkhorn distance of r and c:

Definition 1 (Sinkhorn Distance). d_{M,α}(r, c) := min_{P ∈ U_α(r,c)} ⟨P, M⟩.
Why consider an entropic constraint in optimal transport? The first reason is computational, and is
detailed in Section 4. The second reason is built upon the following intuition. As a classic result
of linear optimization, the OT problem is always solved on a vertex of U (r, c). Such a vertex is
a sparse d × d matrix with only up to 2d − 1 non-zero elements (Brualdi, 2006, §8.1.3). From a
probabilistic perspective, such vertices are quasi-deterministic joint probabilities, since if p_ij > 0,
then very few probabilities p_ij′ for j′ ≠ j will be non-zero in general. Rather than considering
such outliers of U (r, c) as the basis of OT distances, we propose to restrict the search for low cost
joint probabilities to tables with sufficient smoothness. Note that this is equivalent to considering
the maximum-entropy principle (Jaynes, 1957; Darroch and Ratcliff, 1972) if we were to maximize
entropy while keeping the transportation cost constrained.
Before proceeding to the description of the properties of Sinkhorn distances, we note that Ferradans
et al. (2013) have recently explored similar ideas. They relax and penalize (through graph-based
norms) the original transport problem to avoid undesirable properties exhibited by the original optima in the problem of color matching. Combined, their idea and ours suggest that many more
smooth regularizers will be worth investigating to solve the OT problem, driven by either or both
computational and modeling motivations.
Metric Properties of Sinkhorn Distances. When α is large enough, the Sinkhorn distance coincides with the classic OT distance. When α = 0, the Sinkhorn distance has a closed form and
becomes a negative definite kernel if one assumes that M is itself a negative definite distance, or
equivalently a Euclidean distance matrix¹.

Property 1. For α large enough, the Sinkhorn distance d_{M,α} is the transport distance d_M.

Proof. Since for any P ∈ U(r, c), h(P) is lower bounded by ½(h(r) + h(c)), we have that for α
large enough U_α(r, c) = U(r, c) and thus both quantities coincide.

Property 2 (Independence Kernel). d_{M,0} = r^T M c. If M is a Euclidean distance matrix, d_{M,0} is a
negative definite kernel and e^{−t d_{M,0}}, the independence kernel, is positive definite for all t > 0.
The proof is provided in the appendix. Beyond these two extreme cases, the main theorem of this
section states that Sinkhorn distances are symmetric and satisfy triangle inequalities for all possible
values of α. Since for α small enough d_{M,α}(r, r) > 0 for any r such that h(r) > 0, Sinkhorn
distances cannot satisfy the coincidence axiom (d(x, y) = 0 ⇔ x = y holds for all x, y). However,
multiplying d_{M,α} by 1_{r≠c} suffices to recover the coincidence property if needed.

Theorem 1. For all α ≥ 0 and M ∈ M, d_{M,α} is symmetric and satisfies all triangle inequalities.
The function (r, c) ↦ 1_{r≠c} d_{M,α}(r, c) satisfies all three distance axioms.
¹ ∃n, ∃φ_1, ..., φ_d ∈ R^n such that m_ij = ‖φ_i − φ_j‖²_2. Recall that, in that case, M raised to power t
element-wise, [m_ij^t], 0 < t < 1, is also a Euclidean distance matrix (Berg et al., 1984, p.78, §3.2.10).
[Figure 1 diagram: the polytope U(r, c) ⊂ R^{d×d}, the KL ball U_α(r, c) centered at rc^T (λ = 0), the classic optimum P* = argmin_{P∈U(r,c)} ⟨P, M⟩, the constrained optimum P_α = argmin_{P∈U_α(r,c)} ⟨P, M⟩, and the regularized optimum P^λ = argmin_{P∈U(r,c)} ⟨P, M⟩ − (1/λ) h(P), which moves from rc^T towards P* as λ → ∞, up to a machine-precision limit λ_max.]
Figure 1: Transport polytope U(r, c) and Kullback-Leibler ball U_α(r, c) of level α centered
around rc^T. This drawing implicitly assumes that the optimal transport P* is unique. The Sinkhorn
distance d_{M,α}(r, c) is equal to ⟨P_α, M⟩, the minimum of the dot product with M on that ball. For α
large enough, both objectives coincide, as U_α(r, c) gradually overlaps with U(r, c) in the vicinity
of P*. The dual-Sinkhorn distance d^λ_M(r, c), the minimum of the transport problem regularized by
minus the entropy divided by λ, reaches its minimum at a unique solution P^λ, forming a regularization path for varying λ from rc^T to P*. For a given value of λ, and a pair (r, c), there exists
α ∈ [0, ∞] such that both d^λ_M(r, c) and d_{M,α}(r, c) coincide. d^λ_M can be efficiently computed using
Sinkhorn's fixed point iteration (1967). Although the convergence to P^λ of this fixed point iteration
is theoretically guaranteed as the number of iterations grows, the procedure cannot work beyond a problem-dependent
value λ_max beyond which some entries of e^{−λM} are represented as zeroes in memory.
The gluing lemma (Villani, 2009, p.19) is key to proving that OT distances are indeed distances. We
propose a variation of this lemma to prove our result:
Lemma 1 (Gluing Lemma With Entropic Constraint). Let α ≥ 0 and x, y, z ∈ Σ_d. Let P ∈
U_α(x, y) and Q ∈ U_α(y, z). Let S be the d × d matrix defined as s_ik := Σ_j p_ij q_jk / y_j. Then S ∈ U_α(x, z).
The proof is provided in the appendix. We can prove the triangle inequality for d_{M,α} by using the
same proof strategy as that used for classic transport distances:

Proof of Theorem 1. The symmetry of d_{M,α} is a direct result of M's symmetry. Let x, y, z be three
elements in Σ_d. Let P ∈ U_α(x, y) and Q ∈ U_α(y, z) be two optimal solutions for d_{M,α}(x, y) and
d_{M,α}(y, z) respectively. Using the matrix S of U_α(x, z) provided in Lemma 1, we proceed with the
following chain of inequalities:

d_{M,α}(x, z) = min_{P∈U_α(x,z)} ⟨P, M⟩ ≤ ⟨S, M⟩ = Σ_{ik} m_ik Σ_j p_ij q_jk / y_j
  ≤ Σ_{ijk} (m_ij + m_jk) p_ij q_jk / y_j
  = Σ_{ij} m_ij p_ij Σ_k q_jk / y_j + Σ_{jk} m_jk q_jk Σ_i p_ij / y_j
  = Σ_{ij} m_ij p_ij + Σ_{jk} m_jk q_jk = d_{M,α}(x, y) + d_{M,α}(y, z).
4 Computing Regularized Transport with Sinkhorn's Algorithm
We consider in this section a Lagrange multiplier for the entropy constraint of Sinkhorn distances:

For λ > 0, d^λ_M(r, c) := ⟨P^λ, M⟩, where P^λ = argmin_{P∈U(r,c)} ⟨P, M⟩ − (1/λ) h(P).   (2)

By duality theory we have that to each λ corresponds an α ∈ [0, ∞] such that d_{M,α}(r, c) = d^λ_M(r, c)
holds for that pair (r, c). We call d^λ_M the dual-Sinkhorn divergence and show that it can be computed
for a much cheaper cost than the original distance d_M. Figure 1 summarizes the relationships between d_M, d_{M,α} and d^λ_M. Since the entropy of P^λ decreases monotonically with λ, computing d_{M,α}
can be carried out by computing d^λ_M with increasing values of λ until h(P^λ) reaches h(r) + h(c) − α.
We do not consider this problem here and only use the dual-Sinkhorn divergence in our experiments.

Computing d^λ_M with Matrix Scaling Algorithms. Adding an entropy regularization to the optimal transport problem enforces a simple structure on the optimal regularized transport P^λ:

Lemma 2. For λ > 0, the solution P^λ is unique and has the form P^λ = diag(u) K diag(v),
where u and v are two non-negative vectors of R^d uniquely defined up to a multiplicative factor and
K := e^{−λM} is the element-wise exponential of −λM.
Proof. The existence and unicity of P^λ follows from the boundedness of U(r, c) and the strict
convexity of minus the entropy. The fact that P^λ can be written as a rescaled version of K is a well
known fact in transport theory (Erlander and Stewart, 1990, §3.3): let L(P, α, β) be the Lagrangian
of Equation (2) with dual variables α, β ∈ R^d for the two equality constraints in U(r, c):

L(P, α, β) = Σ_{ij} [ (1/λ) p_ij log p_ij + p_ij m_ij ] + α^T (P 1_d − r) + β^T (P^T 1_d − c).

For any couple (i, j), (∂L/∂p_ij = 0) ⇒ p_ij = e^{−1/2 − λα_i} e^{−λ m_ij} e^{−1/2 − λβ_j}. Since K is
strictly positive, Sinkhorn's theorem (1967) states that there exists a unique matrix of the form
diag(u) K diag(v) that belongs to U(r, c), where u, v > 0_d. P^λ is thus necessarily that matrix,
and can be computed with Sinkhorn's fixed point iteration (u, v) ← (r./(Kv), c./(K^T u)).

Given K and marginals r and c, one only needs to iterate Sinkhorn's update a sufficient number
of times to converge to P^λ. One can show that these successive updates carry out iteratively the
projection of K on U(r, c) in the Kullback-Leibler sense. This fixed point iteration can be written
as a single update u ← r./(K(c./(K^T u))). When r > 0_d, diag(1./r)K can be stored in a d × d matrix
K̃ to save one Schur vector product operation, with the update u ← 1./(K̃(c./(K^T u))). This can be
easily ensured by selecting the positive indices of r, as seen in the first line of Algorithm 1.
Algorithm 1 Computation of d = [d^λ_M(r, c_1), ..., d^λ_M(r, c_N)], using Matlab syntax.
  Input M, λ, r, C := [c_1, ..., c_N].
  I = (r > 0); r = r(I); M = M(I, :); K = exp(−λM)
  u = ones(length(r), N)/length(r);
  K̃ = bsxfun(@rdivide, K, r) % equivalent to K̃ = diag(1./r)*K
  while u changes or any other relevant stopping criterion do
    u = 1./(K̃*(C./(K'*u)))
  end while
  v = C./(K'*u)
  d = sum(u.*((K.*M)*v))
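As a complement to the Matlab pseudocode, here is a NumPy transcription of Algorithm 1 (a sketch under our naming, not the authors' released code); it also implements the stopping criterion ‖d_t./d_{t−1} − 1‖_∞ < 1e−4 used later in Sect. 5.

import numpy as np

def sinkhorn_divergences(M, lam, r, C, tol=1e-4, max_iter=1000):
    # M: (d, d) cost matrix; r: (d,) source histogram; C: (d, N) target histograms
    I = r > 0
    r, M = r[I], M[I, :]
    K = np.exp(-lam * M)                       # element-wise kernel e^{-lam M}
    Kt = K / r[:, None]                        # K_tilde = diag(1./r) K
    u = np.full((len(r), C.shape[1]), 1.0 / len(r))
    d_prev = None
    for _ in range(max_iter):
        u = 1.0 / (Kt @ (C / (K.T @ u)))       # the single Sinkhorn update
        v = C / (K.T @ u)
        d = np.sum(u * ((K * M) @ v), axis=0)  # current divergence estimates
        if d_prev is not None and np.max(np.abs(d / d_prev - 1)) < tol:
            break                              # ||d_t ./ d_{t-1} - 1||_inf < tol
        d_prev = d
    return d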
Parallelism, Convergence and Stopping Criteria. As can be seen right above, Sinkhorn's algorithm can be vectorized and generalized to N target histograms c_1, ..., c_N. When N = 1 and C
is a vector in Algorithm 1, we recover the simple iteration mentioned in the proof of Lemma 2.
When N > 1, the computations for N target histograms can be simultaneously carried out by updating a single matrix of scaling factors u ∈ R^{d×N}_+ instead of updating a scaling vector u ∈ R^d_+.
This important observation makes the execution of Algorithm 1 particularly suited to GPGPU platforms. Despite ongoing research in that field (Bieling et al., 2010) such speed ups have not been yet
achieved on complex iterative procedures such as the network simplex. Using Hilbert's projective
metric, Franklin and Lorenz (1989) prove that the convergence of the scaling factor u (as well as v)
is linear, with a rate bounded above by κ(K)², where

κ(K) = (√η(K) − 1) / (√η(K) + 1) < 1, and η(K) = max_{i,j,l,m} (K_il K_jm) / (K_jl K_im).

The upper bound κ(K) tends to 1 as λ grows, and we do observe a slower convergence as P^λ gets
closer to the optimal vertex P* (or the optimal facet of U(r, c) if it is not unique). Different stopping
criteria can be used for Algorithm 1. We consider two in this work, which we detail below.
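A brute-force sketch (ours, not the paper's) of the two contraction quantities above; it illustrates how the bound κ(K)² degrades as λ grows, and is only practical for small d.

import numpy as np

def eta(K):
    # eta(K) = max over i,j,l,m of (K_il K_jm) / (K_jl K_im)
    R = K[:, None, :, None] * K[None, :, None, :]      # K_il * K_jm
    S = K[None, :, :, None] * K[:, None, None, :]      # K_jl * K_im
    return np.max(R / S)

def kappa(K):
    s = np.sqrt(eta(K))
    return (s - 1) / (s + 1)                           # linear convergence rate bound

M = np.random.rand(5, 5); M = (M + M.T) / 2; np.fill_diagonal(M, 0)
for lam in (1, 10, 50):
    print(lam, kappa(np.exp(-lam * M)) ** 2)           # the bound worsens with lam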
5 Experimental Results
MNIST Digits. We test the performance of dual-Sinkhorn divergences on the MNIST digits dataset. Each image is converted to a vector of intensities on the 20 × 20 pixel grid, which are then
normalized to sum to 1. We consider a subset of N ∈ {3, 5, 12, 17, 25} × 10³ points in the dataset.
For each subset, we provide mean and standard deviation of classification error using a 4 fold
(3 test, 1 train) cross validation (CV) scheme repeated 6 times, resulting in 24 different experiments.
Given a distance d, we form the kernel e^{−d/t}, where t > 0 is chosen by CV on each training fold
within {1, q10(d), q20(d), q50(d)}, where q_s is the s% quantile of a subset of distances observed in
that fold. We regularize non-positive definite kernel matrices resulting from this computation by
adding a sufficiently large diagonal term. SVMs were run with Libsvm (one-vs-one) for multiclass
classification. We select the regularization C in {10^−2, 10^0, 10^4} using 2 folds/2 repeats CV on the
training fold. We consider the Hellinger, χ², total variation and squared Euclidean (Gaussian kernel)
distances. M is the 400 × 400 matrix of Euclidean distances between the 20 × 20 bins in the grid.
We also tried Mahalanobis distances on this example using exp(-tM.^2), t>0, as well as its inverse,
with varying values of t, but none of these results proved competitive. For the Independence kernel
we considered [m_ij^a] where a ∈ {0.01, 0.1, 1} is chosen by CV on each training fold. We select λ
in {5, 7, 9, 11} × 1/q50(M(:)), where q50(M(:)) is the median distance between pixels. We set the
number of fixed-point iterations to an arbitrary number of 20 iterations. In most (though not all)
folds, the value λ = 9 comes up as the best setting. The dual-Sinkhorn divergence beats by a safe
margin all other distances, including the classic optimal transport distance, here labeled as EMD.

Figure 2: Average test errors with shaded confidence intervals. Errors are computed using 1/4 of the
dataset for train and 3/4 for test. Errors are averaged over 4 folds × 6 repeats = 24 experiments.
Does the Dual-Sinkhorn Divergence Converge to the EMD? We study the convergence of the
dual-Sinkhorn divergence towards classic optimal transport distances as λ grows. Because of the
regularization in Equation (2), d^λ_M(r, c) is necessarily larger than d_M(r, c), and we expect this gap
to decrease as λ increases. Figure 3 illustrates this by plotting the boxplot of the distributions of
(d^λ_M(r, c) − d_M(r, c))/d_M(r, c) over 40² pairs of images from the MNIST database. d^λ_M typically
approximates the EMD with a high accuracy when λ exceeds 50 (median relative gap of 3.4% and
1.2% for 50 and 100 respectively). For this experiment as well as all the other experiments below,
we compute a vector of N divergences d at each iteration, and stop when none of the N values of d
varies more in absolute value than a 1/100th of a percent, i.e. we stop when ‖d_t./d_{t−1} − 1‖_∞ < 1e−4.

Figure 3: Decrease of the gap between the dual-Sinkhorn divergence and the EMD as a function of
λ on a subset of the MNIST dataset. [Boxplots of (Sinkhorn − EMD)/EMD for λ ∈ {1, 5, 9, 15, 25, 50, 100}.]
Several Orders of Magnitude Faster. We measure the computational speed of classic optimal
transport distances vs. that of dual-Sinkhorn divergences using Rubner et al.'s (1997) and Pele and
Werman's (2009) publicly available implementations. We pick a random distance matrix M by
generating a random graph of d vertices with edge presence probability 1/2 and edge weights
uniformly distributed between 0 and 1. M is the all-pairs shortest-path matrix obtained from this
connectivity matrix using the Floyd-Warshall algorithm (Ahuja et al., 1993, §5.6). Using this
procedure, M is likely to be an extreme ray of the cone M (Avis, 1980, p.138). The elements of M
are then normalized to have unit median. We implemented Algorithm 1 in Matlab, and use
emd_mex and emd_hat_gd_metric mex/C files. The EMD distances and Sinkhorn CPU are run on
a single core (2.66 GHz Xeon). Sinkhorn GPU is run on a NVidia Quadro K5000 card. We consider
λ in {1, 10, 50}. λ = 1 results in a relatively dense matrix K, with results comparable to that of the
Independence kernel, while for λ = 10 or 50 K = e^{−λM} has very small values. Rubner et al.'s
implementation cannot be run for histograms larger than d = 512. As can be expected, the
competitive advantage of dual-Sinkhorn divergences over EMD solvers increases with the dimension.
Using a GPU results in a speed-up of an additional order of magnitude.

Figure 4: Average computational time required to compute a distance between two histograms
sampled uniformly in the d dimensional simplex for varying values of d. Dual-Sinkhorn divergences
are run both on a single CPU and on a GPU card. [Log-log plot of 'Avg. Execution Time per
Distance (in s.)' vs. histogram dimension d ∈ {64, 128, 256, 512, 1024, 2048}; curves: FastEMD,
Rubner's emd, and Sink. CPU/GPU for λ = 50, 10, 1.]
Empirical Complexity. To provide an accurate picture of the actual cost of the algorithm, we
replicate the experiments above but focus now on the number of iterations (matrix-matrix products)
typically needed to obtain the convergence of a set of N divergences from a given point r, all
uniformly sampled on the simplex. As can be seen in Figure 5, the number of iterations required
for vector d to converge increases as e^{−λM} becomes diagonally dominant. However, the total
number of iterations does not seem to vary with respect to the dimension. This observation can
explain why we do observe a quadratic (empirical) time complexity O(d²) with respect to the
dimension d in Figure 4 above. These results suggest that the costly action of keeping track of the
actual approximation error (computing variations in d) is not required, and that simply predefining a
fixed number of iterations can work well and yield even additional speedups.

Figure 5: The influence of λ on the number of iterations required to converge on histograms
uniformly sampled from the simplex.
6 Conclusion
We have shown that regularizing the optimal transport problem with an entropic penalty opens the door for new numerical approaches to compute OT. This regularization yields speed-ups that are effective regardless of any assumptions on the ground metric M. Based on preliminary evidence, it seems that dual-Sinkhorn divergences do not perform worse than the EMD, and may in fact perform better in applications. Dual-Sinkhorn divergences are parameterized by a regularization weight λ which should be tuned having both computational and performance objectives in mind, but we have not observed a need to establish a trade-off between both. Indeed, reasonably small values of λ seem to perform better than large ones.
Acknowledgements The author would like to thank: Zaid Harchaoui for suggesting the title of this paper and highlighting the connection between the mutual information of P and its Kullback-Leibler divergence to rc^T; Lieven Vandenberghe, Philip Knight, Sanjeev Arora, Alexandre d'Aspremont and Shun-Ichi Amari for fruitful discussions; reviewers for anonymous comments.
7 Appendix: Proofs
Proof of Property 1. The set U_1(r, c) contains all joint probabilities P for which h(P) = h(r) + h(c). In that case (Cover and Thomas, 1991, Theorem 2.6.6) applies and U_1(r, c) can only be equal to the singleton {rc^T}. If M is negative definite, there exist vectors (φ_1, ..., φ_d) in some Euclidean space R^n such that m_ij = ‖φ_i − φ_j‖_2^2 through (Berg et al., 1984, §3.3.2). We thus have that

r^T M c = Σ_{ij} r_i c_j ‖φ_i − φ_j‖^2 = (Σ_i r_i ‖φ_i‖^2 + Σ_i c_i ‖φ_i‖^2) − 2 Σ_{ij} ⟨r_i φ_i, c_j φ_j⟩
        = r^T u + c^T u − 2 r^T K c

where u_i = ‖φ_i‖^2 and K_ij = ⟨φ_i, φ_j⟩. We used the fact that Σ r_i = Σ c_i = 1 to go from the first to the second equality. r^T M c is thus a n.d. kernel because it is the sum of two n.d. kernels: the first term (r^T u + c^T u) is the sum of the same function evaluated separately on r and c, and thus a negative definite kernel (Berg et al., 1984, §3.2.10); the latter term −2 r^T K c is negative definite as minus a positive definite kernel (Berg et al., 1984, Definition §3.1.1).
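This decomposition is easy to check numerically (our own sketch):

import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 3
phi = rng.normal(size=(d, n))                           # points phi_i in R^n
M = ((phi[:, None, :] - phi[None, :, :]) ** 2).sum(-1)  # m_ij = ||phi_i - phi_j||^2
r, c = rng.dirichlet(np.ones(d)), rng.dirichlet(np.ones(d))
u = (phi ** 2).sum(axis=1)                              # u_i = ||phi_i||^2
K = phi @ phi.T                                         # K_ij = <phi_i, phi_j>
assert np.isclose(r @ M @ c, r @ u + c @ u - 2 * r @ K @ c)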
Remark. The proof above suggests a faster way to compute the Independence kernel. Given a matrix M, one can indeed pre-compute the vector of norms u as well as a Cholesky factor L of K above to preprocess a dataset of histograms, by premultiplying each observation r_i by L and storing only Lr_i, as well as precomputing its diagonal term r_i^T u. Note that the independence kernel is positive definite on histograms with the same 1-norm, but is no longer positive definite for arbitrary vectors.
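A sketch of that preprocessing (our own code and naming; since only M is given, we recover one embedding realizing M by classical double-centering, and add a small jitter so the Cholesky factorization succeeds on a rank-deficient Gram matrix):

import numpy as np

def preprocess_histograms(R, M):
    # R: (d, N) histograms as columns; M: (d, d) negative definite metric
    # with zero diagonal, i.e. m_ij = ||phi_i - phi_j||^2 for some phi.
    d = M.shape[0]
    J = np.eye(d) - np.ones((d, d)) / d
    K = -0.5 * J @ M @ J                  # Gram matrix of one such embedding
    u = np.diag(K).copy()                 # u_i = ||phi_i||^2
    L = np.linalg.cholesky(K + 1e-10 * np.eye(d))   # K ~= L L^T
    return R.T @ u, L.T @ R               # per histogram r: scalar r^T u, vector L^T r

def independence_inner(ru, Lr, cu, Lc):
    # r^T M c = r^T u + c^T u - 2 (L^T r) . (L^T c): an O(d) dot product per pair
    return ru + cu - 2.0 * (Lr @ Lc)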
Proof of Lemma 1. Let T be a probability distribution on {1, ..., d}^3 whose coefficients are defined as

t_ijk := p_ij q_jk / y_j,    (3)

for all indices j such that y_j > 0. For indices j such that y_j = 0, all values t_ijk are set to 0.

Let S := [Σ_j t_ijk]_ik. S is a transport matrix between x and z. Indeed,

Σ_i Σ_j t_ijk = Σ_j (q_jk / y_j) Σ_i p_ij = Σ_j (q_jk / y_j) y_j = Σ_j q_jk = z_k    (column sums)
Σ_k Σ_j t_ijk = Σ_j (p_ij / y_j) Σ_k q_jk = Σ_j (p_ij / y_j) y_j = Σ_j p_ij = x_i    (row sums)
We now prove that h(S) ≥ h(x) + h(z) − α. Let (X, Y, Z) be three random variables jointly distributed as T. Since by definition of T in Equation (3)

p(X, Y, Z) = p(X, Y) p(Y, Z) / p(Y) = p(X) p(Y|X) p(Z|Y),

the triplet (X, Y, Z) is a Markov chain X → Y → Z (Cover and Thomas, 1991, Equation 2.118) and thus, by virtue of the data processing inequality (Cover and Thomas, 1991, Theorem 2.8.1), the following inequality between mutual informations applies: I(X; Y) ≥ I(X; Z), namely

h(X, Z) − h(X) − h(Z) ≥ h(X, Y) − h(X) − h(Y) ≥ −α.
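The composition step above is easy to verify numerically (our own sketch):

import numpy as np

rng = np.random.default_rng(1)
d = 4
P = rng.dirichlet(np.ones(d * d)).reshape(d, d)      # plan between x and y
y = P.sum(axis=0)                                    # y_j = sum_i p_ij
Q = rng.dirichlet(np.ones(d), size=d) * y[:, None]   # plan between y and z
x, z = P.sum(axis=1), Q.sum(axis=0)
S = P @ np.diag(1.0 / y) @ Q                         # s_ik = sum_j p_ij q_jk / y_j
assert np.allclose(S.sum(axis=1), x) and np.allclose(S.sum(axis=0), z)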
References
Ahuja, R., Magnanti, T., and Orlin, J. (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall.
Avis, D. (1980). On the extreme rays of the metric cone. Canadian Journal of Mathematics, 32(1):126–144.
Berg, C., Christensen, J., and Ressel, P. (1984). Harmonic Analysis on Semigroups. Number 100 in Graduate Texts in Mathematics. Springer Verlag.
Bieling, J., Peschlow, P., and Martini, P. (2010). An efficient gpu implementation of the revised simplex method. In Parallel Distributed Processing, 2010 IEEE International Symposium on, pages 1–8.
Brickell, J., Dhillon, I., Sra, S., and Tropp, J. (2008). The metric nearness problem. SIAM J. Matrix Anal. Appl, 30(1):375–396.
Brualdi, R. A. (2006). Combinatorial matrix classes, volume 108. Cambridge University Press.
Cover, T. and Thomas, J. (1991). Elements of Information Theory. Wiley & Sons.
Darroch, J. N. and Ratcliff, D. (1972). Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, 43(5):1470–1480.
Erlander, S. and Stewart, N. (1990). The gravity model in transportation analysis: theory and extensions. Vsp.
Ferradans, S., Papadakis, N., Rabin, J., Peyré, G., Aujol, J.-F., et al. (2013). Regularized discrete optimal transport. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 1–12.
Franklin, J. and Lorenz, J. (1989). On the scaling of multidimensional matrices. Linear Algebra and its applications, 114:717–735.
Good, I. (1963). Maximum entropy for hypothesis formulation, especially for multidimensional contingency tables. The Annals of Mathematical Statistics, pages 911–934.
Grauman, K. and Darrell, T. (2004). Fast contour matching using approximate earth mover's distance. In IEEE Conf. Vision and Patt. Recog., pages 220–227.
Gudmundsson, J., Klein, O., Knauer, C., and Smid, M. (2007). Small manhattan networks and algorithmic applications for the earth movers distance. In Proceedings of the 23rd European Workshop on Computational Geometry, pages 174–177.
Indyk, P. and Thaper, N. (2003). Fast image retrieval via embeddings. In 3rd International Workshop on Statistical and Computational Theories of Vision (at ICCV).
Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev., 106:620–630.
Knight, P. A. (2008). The sinkhorn-knopp algorithm: convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261–275.
Ling, H. and Okada, K. (2007). An efficient earth mover's distance algorithm for robust histogram comparison. IEEE transactions on Patt. An. and Mach. Intell., pages 840–853.
Naor, A. and Schechtman, G. (2007). Planar earthmover is not in l1. SIAM J. Comput., 37(3):804–826.
Pele, O. and Werman, M. (2009). Fast and robust earth mover's distances. In ICCV'09.
Rubner, Y., Guibas, L., and Tomasi, C. (1997). The earth movers distance, multi-dimensional scaling, and color-based image retrieval. In Proceedings of the ARPA Image Understanding Workshop, pages 661–668.
Shirdhonkar, S. and Jacobs, D. (2008). Approximate earth movers distance in linear time. In CVPR 2008, pages 1–8. IEEE.
Sinkhorn, R. (1967). Diagonal equivalence to matrices with prescribed row and column sums. The American Mathematical Monthly, 74(4):402–405.
Villani, C. (2009). Optimal transport: old and new, volume 338. Springer Verlag.
Wilson, A. G. (1969). The use of entropy maximising models, in the theory of trip distribution, mode split and route split. Journal of Transport Economics and Policy, pages 108–126.
Understanding variable importances in forests of randomized trees
Gilles Louppe, Louis Wehenkel, Antonio Sutera and Pierre Geurts
Dept. of EE & CS, University of Liège, Belgium
{g.louppe, l.wehenkel, a.sutera, p.geurts}@ulg.ac.be
Abstract
Despite growing interest and practical use in various scientific areas, variable importances derived from tree-based ensemble methods are not well understood from
a theoretical point of view. In this work we characterize the Mean Decrease Impurity (MDI) variable importances as measured by an ensemble of totally randomized trees in asymptotic sample and ensemble size conditions. We derive a
three-level decomposition of the information jointly provided by all input variables about the output in terms of i) the MDI importance of each input variable, ii)
the degree of interaction of a given input variable with the other input variables,
iii) the different interaction terms of a given degree. We then show that this MDI
importance of a variable is equal to zero if and only if the variable is irrelevant
and that the MDI importance of a relevant variable is invariant with respect to
the removal or the addition of irrelevant variables. We illustrate these properties
on a simple example and discuss how they may change in the case of non-totally
randomized trees such as Random Forests and Extra-Trees.
1 Motivation
An important task in many scientific fields is the prediction of a response variable based on a set
of predictor variables. In many situations though, the aim is not only to make the most accurate
predictions of the response but also to identify which predictor variables are the most important
to make these predictions, e.g. in order to understand the underlying process. Because of their
applicability to a wide range of problems and their capability to both build accurate models and,
at the same time, to provide variable importance measures, Random Forests (Breiman, 2001) and
variants such as Extra-Trees (Geurts et al., 2006) have become a major data analysis tool used with
success in various scientific areas.
Despite their extensive use in applied research, only a couple of works have studied the theoretical
properties and statistical mechanisms of these algorithms. Zhao (2000), Breiman (2004), Biau et al.
(2008); Biau (2012), Meinshausen (2006) and Lin and Jeon (2006) investigated simplified to very
realistic variants of these algorithms and proved the consistency of those variants. Little is known
however regarding the variable importances computed by Random Forests like algorithms, and, as far as we know, the work of Ishwaran (2007) is indeed the only theoretical study of tree-based
variable importance measures.
In this work, we aim at filling this gap and present a theoretical analysis of the Mean Decrease
Impurity importance derived from ensembles of randomized trees. The rest of the paper is organized
as follows: in section 2, we provide the background about ensembles of randomized trees and recall
how variable importances can be derived from them; in section 3, we then derive a characterization
in asymptotic conditions and show how variable importances derived from totally randomized trees
offer a three-level decomposition of the information jointly contained in the input variables about the
output; section 4 shows that this characterization only depends on the relevant variables and section 5
1
discusses these ideas in the context of variants closer to the Random Forest algorithm; section 6
then illustrates all these ideas on an artificial problem; finally, section 7 includes our conclusions
and proposes directions of future works.
2 Background
In this section, we first describe decision trees, as well as forests of randomized trees. Then, we
describe the two major variable importance measures derived from them, including the Mean Decrease Impurity (MDI) importance that we will study in the subsequent sections.
2.1 Single classification and regression trees and random forests
A binary classification (resp. regression) tree (Breiman et al., 1984) is an input-output model
represented by a tree structure T, from a random input vector (X_1, ..., X_p) taking its values in X_1 × · · · × X_p = X to a random output variable Y ∈ Y. Any node t in the tree represents a subset
of the space X , with the root node being X itself. Internal nodes t are labeled with a binary test
(or split) st = (Xm < c) dividing their subset in two subsets1 corresponding to their two children
tL and tR , while the terminal nodes (or leaves) are labeled with a best guess value of the output
variable2 . The predicted output Y? for a new instance is the label of the leaf reached by the instance
when it is propagated through the tree. A tree is built from a learning sample of size N drawn from
P (X1 , ..., Xp , Y ) using a recursive procedure which identifies at each node t the split st = s? for
which the partition of the Nt node samples into tL and tR maximizes the decrease
?i(s, t) = i(t) ? pL i(tL ) ? pR i(tR )
(1)
of some impurity measure i(t) (e.g., the Gini index, the Shannon entropy, or the variance of Y ),
and where pL = NtL /Nt and pR = NtR /Nt . The construction of the tree stops , e.g., when nodes
become pure in terms of Y or when all variables Xi are locally constant.
Single trees typically suffer from high variance, which makes them not competitive in terms of
accuracy. A very efficient and simple way to address this flaw is to use them in the context of
randomization-based ensemble methods. Specifically, the core principle is to introduce random
perturbations into the learning procedure in order to produce several different decision trees from
a single learning set and to use some aggregation technique to combine the predictions of all these
trees. In Bagging (Breiman, 1996), trees are built on random bootstrap copies of the original data,
hence producing different decision trees. In Random Forests (Breiman, 2001), Bagging is extended
and combined with a randomization of the input variables that are used when considering candidate
variables to split internal nodes t. In particular, instead of looking for the best split s? among all
variables, the Random Forest algorithm selects, at each node, a random subset of K variables and
then determines the best split over these latter variables only.
2.2 Variable importances
In the context of ensembles of randomized trees, Breiman (2001, 2002) proposed to evaluate the importance of a variable X_m for predicting Y by adding up the weighted impurity decreases p(t) Δi(s_t, t) for all nodes t where X_m is used, averaged over all N_T trees in the forest:

Imp(X_m) = (1/N_T) Σ_T Σ_{t∈T : v(s_t)=X_m} p(t) Δi(s_t, t)    (2)

and where p(t) is the proportion N_t/N of samples reaching t and v(s_t) is the variable used in split s_t. When using the Gini index as impurity function, this measure is known as the Gini importance or
Mean Decrease Gini. However, since it can be defined for any impurity measure i(t), we will refer
to Equation 2 as the Mean Decrease Impurity importance (MDI), no matter the impurity measure
i(t). We will characterize and derive results for this measure in the rest of this text.
¹ More generally, splits are defined by a (not necessarily binary) partition of the range X_m of possible values of a single variable X_m.
² e.g. determined as the majority class j(t) (resp., the average value ŷ(t)) within the subset of the leaf t.
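For intuition, Equation 2 can be computed directly from fitted scikit-learn trees; the sketch below (our own code) walks each tree's arrays, and is closely related to what scikit-learn's feature_importances_ computes with the Gini index (scikit-learn additionally normalizes each tree's importances):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mdi_importances(forest, n_features):
    imp = np.zeros(n_features)
    for est in forest.estimators_:
        t = est.tree_
        N = t.weighted_n_node_samples[0]         # samples reaching the root
        for node in range(t.node_count):
            left, right = t.children_left[node], t.children_right[node]
            if left == -1:                       # leaf node: no split here
                continue
            w, wL, wR = (t.weighted_n_node_samples[i] for i in (node, left, right))
            delta = t.impurity[node] - (wL / w) * t.impurity[left] \
                                     - (wR / w) * t.impurity[right]
            imp[t.feature[node]] += (w / N) * delta   # p(t) * Delta_i(s_t, t)
    return imp / len(forest.estimators_)

# usage: forest = RandomForestClassifier(n_estimators=100).fit(X, y)
#        mdi_importances(forest, X.shape[1])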
In addition to MDI, Breiman (2001, 2002) also proposed to evaluate the importance of a variable
Xm by measuring the Mean Decrease Accuracy (MDA) of the forest when the values of Xm are
randomly permuted in the out-of-bag samples. For that reason, this latter measure is also known as
the permutation importance.
Thanks to popular machine learning softwares (Breiman, 2002; Liaw and Wiener, 2002; Pedregosa
et al., 2011), both of these variable importance measures have shown their practical utility in an
increasing number of experimental studies. Little is known however regarding their inner workings.
Strobl et al. (2007) compare both MDI and MDA and show experimentally that the former is biased
towards some predictor variables. As explained by White and Liu (1994) in case of single decision
trees, this bias stems from an unfair advantage given by the usual impurity functions i(t) towards
predictors with a large number of values. Strobl et al. (2008) later showed that MDA is biased as
well, and that it overestimates the importance of correlated variables, although this effect was not
confirmed in a later experimental study by Genuer et al. (2010). From a theoretical point of view,
Ishwaran (2007) provides a detailed theoretical development of a simplified version of MDA, giving
key insights for the understanding of the actual MDA.
3 Variable importances derived from totally randomized tree ensembles
Let us now consider the MDI importance as defined by Equation 2, and let us assume a set V =
{X1 , ..., Xp } of categorical input variables and a categorical output Y . For the sake of simplicity
we will use the Shannon entropy as impurity measure, and focus on totally randomized trees; later
on we will discuss other impurity measures and tree construction algorithms.
Given a training sample L of N joint observations of X1 , ..., Xp , Y independently drawn from the
joint distribution P (X1 , ..., Xp , Y ), let us assume that we infer from it an infinitely large ensemble
of totally randomized and fully developed trees. In this setting, a totally randomized and fully
developed tree is defined as a decision tree in which each node t is partitioned using a variable Xi
picked uniformly at random among those not yet used at the parent nodes of t, and where each t is
split into |Xi | sub-trees, i.e., one for each possible value of Xi , and where the recursive construction
process halts only when all p variables have been used along the current branch. Hence, in such a
tree, leaves are all at the same depth p, and the set of leaves of a fully developed tree is in bijection
with the set X of all possible joint configurations of the p input variables. For example, if the input
variables are all binary, the resulting tree will have exactly 2^p leaves.
Theorem 1. The MDI importance of X_m ∈ V for Y as computed with an infinite ensemble of fully developed totally randomized trees and an infinitely large training sample is:

Imp(X_m) = Σ_{k=0}^{p−1} (1/C_p^k) (1/(p−k)) Σ_{B∈P_k(V^{−m})} I(X_m; Y | B),    (3)

where V^{−m} denotes the subset V \ {X_m}, P_k(V^{−m}) is the set of subsets of V^{−m} of cardinality k, and I(X_m; Y | B) is the conditional mutual information of X_m and Y given the variables in B.
Proof. See Appendix B.
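For small p, Equation 3 can also be evaluated by brute force from a joint probability table, which is useful for checking the results of Section 6 below (our own sketch; entropies are in bits):

import numpy as np
from itertools import combinations
from math import comb

def H(joint, axes):
    # Shannon entropy of the marginal of `joint` over the given axes.
    other = tuple(i for i in range(joint.ndim) if i not in axes)
    marg = np.asarray(joint.sum(axis=other)).ravel()
    marg = marg[marg > 0]
    return float(-(marg * np.log2(marg)).sum())

def mdi_totally_randomized(joint, m):
    # Equation 3; `joint` has shape (|X_1|, ..., |X_p|, |Y|), Y on the last axis.
    p = joint.ndim - 1
    others = [i for i in range(p) if i != m]
    imp = 0.0
    for k in range(p):
        for B in combinations(others, k):
            # I(X_m; Y | X_B) = H(B,m) + H(B,Y) - H(B) - H(B,m,Y)
            cmi = (H(joint, B + (m,)) + H(joint, B + (p,))
                   - H(joint, B) - H(joint, B + (m, p)))
            imp += cmi / (comb(p, k) * (p - k))
    return imp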
Theorem 2. For any ensemble of fully developed trees in asymptotic learning sample size conditions (e.g., in the same conditions as those of Theorem 1), we have that

Σ_{m=1}^{p} Imp(X_m) = I(X_1, . . . , X_p; Y).    (4)
Proof. See Appendix C.
Together, theorems 1 and 2 show that variable importances derived from totally randomized trees
in asymptotic conditions provide a three-level decomposition of the information I(X1 , . . . , Xp ; Y )
contained in the set of input variables about the output variable. The first level is a decomposition
among input variables (see Equation 4 of Theorem 2), the second level is a decomposition along the
degrees k of interaction terms of a variable with the other ones (see the outer sum in Equation 3 of
Theorem 1), and the third level is a decomposition along the combinations B of interaction terms of
fixed size k of possible interacting variables (see the inner sum in Equation 3).
We observe that the decomposition includes, for each variable, each and every interaction term
of each and every degree weighted in a fashion resulting only from the combinatorics of possible
interaction terms. In particular, since all I(Xm ; Y |B) terms are at most equal to H(Y ), the prior
entropy of Y , the p terms of the outer sum of Equation 3 are each upper bounded by
(1/C_p^k) (1/(p−k)) Σ_{B∈P_k(V^{−m})} H(Y) = (1/C_p^k) (1/(p−k)) C_{p−1}^k H(Y) = (1/p) H(Y).

As such, the second level decomposition resulting from totally randomized trees makes the p sub-importance terms (1/C_p^k) (1/(p−k)) Σ_{B∈P_k(V^{−m})} I(X_m; Y | B) each contribute (at most) 1/p of the total importance, even though they each include a combinatorially different number of terms.
4 Importances of relevant and irrelevant variables
Following Kohavi and John (1997), let us define as relevant to Y with respect to V a variable X_m for which there exists at least one subset B ⊆ V (possibly empty) such that I(X_m; Y|B) > 0.³ Thus we define as irrelevant to Y with respect to V a variable X_i for which, for all B ⊆ V, I(X_i; Y|B) = 0. Remark that if X_i is irrelevant to Y with respect to V, then by definition it is also irrelevant to Y with respect to any subset of V.

Theorem 3. X_i ∈ V is irrelevant to Y with respect to V if and only if its infinite sample size importance as computed with an infinite ensemble of fully developed totally randomized trees built on V for Y is 0.
Proof. See Appendix D.
Lemma 4. Let X_i ∉ V be an irrelevant variable for Y with respect to V. The infinite sample size importance of X_m ∈ V as computed with an infinite ensemble of fully developed totally randomized trees built on V for Y is the same as the importance derived when using V ∪ {X_i} to build the ensemble of trees for Y.
Proof. See Appendix E.
Theorem 5. Let V_R ⊆ V be the subset of all variables in V that are relevant to Y with respect to V. The infinite sample size importance of any variable X_m ∈ V_R as computed with an infinite ensemble of fully developed totally randomized trees built on V_R for Y is the same as its importance computed in the same conditions by using all variables in V. That is:

Imp(X_m) = Σ_{k=0}^{p−1} (1/C_p^k) (1/(p−k)) Σ_{B∈P_k(V^{−m})} I(X_m; Y | B)
         = Σ_{l=0}^{r−1} (1/C_r^l) (1/(r−l)) Σ_{B∈P_l(V_R^{−m})} I(X_m; Y | B)    (5)

where r is the number of relevant variables in V_R.
Proof. See Appendix F.
Theorems 3 and 5 show that the importances computed with an ensemble of totally randomized trees depend only on the relevant variables. Irrelevant variables have a zero importance and do not
affect the importance of relevant variables. Practically, we believe that such properties are desirable
conditions for a sound criterion assessing the importance of a variable. Indeed, noise should not be
credited of any importance and should not make any other variable more (or less) important.
³ Among the relevant variables, we have the marginally relevant ones, for which I(X_m; Y) > 0, the strongly relevant ones, for which I(X_m; Y | V^{−m}) > 0, and the weakly relevant variables, which are relevant but not strongly relevant.
5 Random Forest variants
In this section, we consider and discuss variable importances as computed with other types of ensembles of randomized trees. We first show how our results extend to any other impurity measure,
and then analyze importances computed by depth-pruned ensemble of randomized trees and those
computed by randomized trees built on random subspaces of fixed size. Finally, we discuss the case
of non-totally randomized trees.
5.1 Generalization to other impurity measures
Although our characterization in sections 3 and 4 uses Shannon entropy as the impurity measure,
we show in Appendix I that theorems 1, 3 and 5 hold for other impurity measures, simply substituting conditional mutual information for conditional impurity reduction in the different formulas
and in the definition of irrelevant variables. In particular, our results thus hold for the Gini index in
classification and can be extended to regression problems using variance as the impurity measure.
5.2 Pruning and random subspaces
In sections 3 and 4, we considered totally randomized trees that were fully developed, i.e. until all
p variables were used within each branch. When totally randomized trees are developed only up to
some smaller depth q ≤ p, we show in Proposition 6 that the variable importances as computed by
these trees is limited to the q first terms of Equation 3. We then show in Proposition 7 that these
latter importances are actually the same as when each tree of the ensemble is fully developed over a
random subspace (Ho, 1998) of q variables drawn prior to its construction.
Proposition 6. The importance of X_m ∈ V for Y as computed with an infinite ensemble of pruned totally randomized trees built up to depth q ≤ p and an infinitely large training sample is:

Imp(X_m) = Σ_{k=0}^{q−1} (1/C_p^k) (1/(p−k)) Σ_{B∈P_k(V^{−m})} I(X_m; Y | B)    (6)
Proof. See Appendix G.
Proposition 7. The importance of X_m ∈ V for Y as computed with an infinite ensemble of pruned totally randomized trees built up to depth q ≤ p and an infinitely large training sample is identical to the importance as computed for Y with an infinite ensemble of fully developed totally randomized trees built on random subspaces of q variables drawn from V.
Proof. See Appendix H.
As long as q ≥ r (where r denotes the number of relevant variables in V), it can easily be shown
that all relevant variables will still obtain a strictly positive importance, which will however differ
in general from the importances computed by fully grown totally randomized trees built over all
variables. Also, each irrelevant variable of course keeps an importance equal to zero, which means
that, in asymptotic conditions, these pruning and random subspace methods would still allow us to identify the relevant variables, as long as we have a good upper bound q on r.
5.3 Non-totally randomized trees
In our analysis in the previous sections, trees are built totally at random and hence do not directly
relate to those built in Random Forests (Breiman, 2001) or in Extra-Trees (Geurts et al., 2006). To
better understand the importances as computed by those algorithms, let us consider a close variant
of totally randomized trees: at each node t, let us instead draw uniformly at random 1 ≤ K ≤ p variables and let us choose the one that maximizes Δi(t). Notice that, for K = 1, this procedure
amounts to building ensembles of totally randomized trees as defined before, while, for K = p, it
amounts to building classical single trees in a deterministic way.
First, the importance of Xm ? V as computed with an infinite ensemble of such randomized trees
is not the same as Equation 3. For K > 1, masking effects indeed appear: at t, some variables are
never selected because there always is some other variable for which Δi(t) is larger. Such effects
tend to pull the best variables at the top of the trees and to push the others at the leaves. As a result,
the importance of a variable no longer decomposes into a sum including all I(Xm ; Y |B) terms.
The importance of the best variables decomposes into a sum of their mutual information alone or conditioned only with the best others, but not conditioned with all variables, since they no longer ever appear at the bottom of trees. By contrast, the importance of the least promising variables now decomposes into a sum of their mutual information conditioned only with all variables, but not alone or conditioned with a couple of others, since they no longer ever appear at the top of trees. In other words, because of the guided structure of the trees, the importance of X_m now takes
into account only some of the conditioning sets B, which may over- or underestimate its overall
relevance.
To make things clearer, let us consider a simple example. Let X_1 perfectly explain Y and let X_2 be a slightly noisy copy of X_1 (i.e., I(X_1; Y) ≈ I(X_2; Y), I(X_1; Y|X_2) = ε and I(X_2; Y|X_1) = 0). Using totally randomized trees, the importances of X_1 and X_2 are nearly equal, the importance of X_1 being slightly higher than the importance of X_2:

Imp(X_1) = (1/2) I(X_1; Y) + (1/2) I(X_1; Y|X_2) = (1/2) I(X_1; Y) + ε/2
Imp(X_2) = (1/2) I(X_2; Y) + (1/2) I(X_2; Y|X_1) = (1/2) I(X_2; Y) + 0
In non-totally randomized trees, for K = 2, X1 is always selected at the root node and X2 is
always used in its children. Also, since X1 perfectly explains Y , all its children are pure and the
reduction of entropy when splitting on X2 is null. As a result, ImpK=2 (X1 ) = I(X1 ; Y ) and
ImpK=2 (X2 ) = I(X2 ; Y |X1 ) = 0. Masking effects are here clearly visible: the true importance
of X2 is masked by X1 as if X2 were irrelevant, while it is only a bit less informative than X1 .
As a direct consequence of the example above, for K > 1, it is no longer true that a variable is
irrelevant if and only if its importance is zero. In the same way, it can also be shown that the
importances become dependent on the number of irrelevant variables. Let us indeed consider the
following counter-example: let us add in the previous example an irrelevant variable Xi with respect
to {X1 , X2 } and let us keep K = 2. The probability of selecting X2 at the root node now becomes
positive, which means that ImpK=2 (X2 ) now includes I(X2 ; Y ) > 0 and is therefore strictly larger
than the importance computed before. For K fixed, adding irrelevant variables dampens masking
effects, which thereby makes importances indirectly dependent on the number of irrelevant variables.
In conclusion, the importances as computed with totally randomized trees exhibit properties that, by extension, neither random forests nor extra-trees possess. With totally randomized trees, the
importance of Xm only depends on the relevant variables and is 0 if and only if Xm is irrelevant.
As we have shown, it may no longer be the case for K > 1. Asymptotically, the use of totally
randomized trees for assessing the importance of a variable may therefore be more appropriate. In
a finite setting (i.e., a limited number of samples and a limited number of trees), guiding the choice
of the splitting variables remains however a sound strategy. In such a case, I(X_m; Y|B) can be measured neither for all X_m nor for all B. It is therefore pragmatic to promote those that look the most promising, even if the resulting importances may be biased.
6 Illustration on a digit recognition problem
In this section, we consider the digit recognition problem of (Breiman et al., 1984) for illustrating
variable importances as computed with totally randomized trees. We verify that they match with our
theoretical developments and that they decompose as foretold. We also compare these importances
with those computed by an ensemble of non-totally randomized trees, as discussed in section 5.3,
for increasing values of K.
Let us consider a seven-segment indicator displaying numerals using horizontal and vertical lights
in on-off combinations, as illustrated in Figure 1. Let Y be a random variable taking its value in
{0, 1, ..., 9} with equal probability and let X1 , ..., X7 be binary variables whose values are each
determined univocally given the corresponding value of Y in Table 1.
Since Table 1 perfectly defines the data distribution, and given the small dimensionality of the problem, it is practicable to directly apply Equation 3 to compute variable importances. To verify our
Table 1: Values of Y, X1, ..., X7

 y | x1 x2 x3 x4 x5 x6 x7
 0 |  1  1  1  0  1  1  1
 1 |  0  0  1  0  0  1  0
 2 |  1  0  1  1  1  0  1
 3 |  1  0  1  1  0  1  1
 4 |  0  1  1  1  0  1  0
 5 |  1  1  0  1  0  1  1
 6 |  1  1  0  1  1  1  1
 7 |  1  0  1  0  0  1  0
 8 |  1  1  1  1  1  1  1
 9 |  1  1  1  1  0  1  1

[Figure 1: 7-segment display.]

Table 2: Variable importances as computed with an ensemble of randomized trees, for increasing values of K. Importances at K = 1 follow their theoretical values, as predicted by Equation 3 in Theorem 1. However, as K increases, importances diverge due to masking effects. In accordance with Theorem 2, their sum is also always equal to I(X1, . . . , X7; Y) = H(Y) = log2(10) = 3.321 since inputs allow to perfectly predict the output.

    | Eqn. 3  K=1    K=2    K=3    K=4    K=5    K=6    K=7
 X1 | 0.412   0.414  0.362  0.327  0.309  0.304  0.305  0.306
 X2 | 0.581   0.583  0.663  0.715  0.757  0.787  0.801  0.799
 X3 | 0.531   0.532  0.512  0.496  0.489  0.483  0.475  0.475
 X4 | 0.542   0.543  0.525  0.484  0.445  0.414  0.409  0.412
 X5 | 0.656   0.658  0.731  0.778  0.810  0.827  0.831  0.835
 X6 | 0.225   0.221  0.140  0.126  0.122  0.122  0.121  0.120
 X7 | 0.372   0.368  0.385  0.392  0.387  0.382  0.375  0.372
  Σ | 3.321   3.321  3.321  3.321  3.321  3.321  3.321  3.321
theoretical developments, we then compare in Table 2 variable importances as computed by Equation 3 and those yielded by an ensemble of 10000 totally randomized trees (K = 1). Note that
given the known structure of the problem, building trees on a sample of finite size that perfectly
follows the data distribution amounts to building them on a sample of infinite size. At best, trees
can thus be built on a 10-sample dataset, containing exactly one sample for each of the equiprobable
outcomes of Y . As the table illustrates, the importances yielded by totally randomized trees match
those computed by Equation 3, which confirms Theorem 1. Small differences stem from the fact
that a finite number of trees were built in our simulations, but those discrepancies should disappear
as the size of the ensemble grows towards infinity. It also shows that importances indeed add up to
I(X1 , ...X7 ; Y ), which confirms Theorem 2. Regarding the actual importances, they indicate that
X5 is stronger than all others, followed (in that order) by X2, X4 and X3, which also show large
importances. X1 , X7 and X6 appear to be the less informative. The table also reports importances
for increasing values of K. As discussed before, we see that a large value of K yields importances
that can be either overestimated (e.g., at K = 7, the importances of X2 and X5 are larger than at
K = 1) or underestimated due to masking effects (e.g., at K = 7, the importances of X1 , X3 , X4
and X6 are smaller than at K = 1, as if they were less important). It can also be observed that
masking effects may even induce changes in the variable rankings (e.g., compare the rankings at
K = 1 and at K = 7), which thus confirms that importances are differently affected.
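Reusing the brute-force implementation of Equation 3 sketched after Theorem 1, the "Eqn. 3" column of Table 2 can be reproduced from Table 1 (our own sketch; the printed values should closely match 0.412, 0.581, 0.531, 0.542, 0.656, 0.225, 0.372):

# P(Y = y) = 1/10, with X_1..X_7 deterministic given Y (rows of Table 1).
rows = [(1,1,1,0,1,1,1), (0,0,1,0,0,1,0), (1,0,1,1,1,0,1), (1,0,1,1,0,1,1),
        (0,1,1,1,0,1,0), (1,1,0,1,0,1,1), (1,1,0,1,1,1,1), (1,0,1,0,0,1,0),
        (1,1,1,1,1,1,1), (1,1,1,1,0,1,1)]
joint = np.zeros((2,) * 7 + (10,))
for y, x in enumerate(rows):
    joint[x + (y,)] = 0.1
print([round(mdi_totally_randomized(joint, m), 3) for m in range(7)])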
To better understand why a variable is important, it is also insightful to look at its decomposition into
its p sub-importance terms, as shown in Figure 2. Each row in the plots of the figure corresponds to one of the p = 7 variables and each column to a size k of conditioning sets. As such, the value at row m and column k corresponds to the importance of X_m when conditioned with k other variables (e.g., to the term (1/C_p^k) (1/(p−k)) Σ_{B∈P_k(V^{−m})} I(X_m; Y|B) in Equation 3 in the case of totally randomized trees). In the left plot, for K = 1, the figure first illustrates how importances yielded by totally
randomized trees decompose along the degrees k of interaction terms. We can observe that they each equally contribute (at most) to the total importance of a variable. The plot also illustrates why
X5 is important: it is informative either alone or conditioned with any combination of the other
variables (all of its terms are significantly larger than 0). By contrast, it also clearly shows why
[Figure 2: two heat-map panels (K = 1, left; K = 7, right); rows are the variables X1-X7, columns are conditioning-set sizes k = 0, . . . , 6; color scale from 0.0 to 0.5.]
Figure 2: Decomposition of variable importances along the degrees k of interactions of one variable with the other ones. At K = 1, all I(X_m; Y|B) are accounted for in the total importance, while at K = 7 only some of them are taken into account due to masking effects.
X6 is not important: neither alone nor combined with others X6 seems to be very informative
(all of its terms are close to 0). More interestingly, this figure also highlights redundancies: X7
is informative alone or conditioned with a couple of others (the first terms are significantly larger
than 0), but becomes uninformative when conditioned with many others (the last terms are closer
to 0). The right plot, for K = 7, illustrates the decomposition of importances when variables are
chosen in a deterministic way. The first thing to notice is masking effects. Some of the I(Xm ; Y |B)
terms are indeed clearly never encountered and their contribution is therefore reduced to 0 in the
total importance. For instance, for k = 0, the sub-importances of X2 and X5 are positive, while
all others are null, which means that only those two variables are ever selected at the root node,
hence masking the others. As a consequence, this also means that the importances of the remaining
variables are biased, and that they actually only account for their relevance when conditioned to X2
or X5 , but not of their relevance in other contexts. At k = 0, masking effects also amplify the
contribution of I(X2 ; Y ) (resp. I(X5 ; Y )) since X2 (resp. X5 ) appears more frequently at the root
node than in totally randomized trees. In addition, because nodes become pure before reaching
depth p, conditioning sets of size k ≥ 4 are never actually encountered, which means that we can no
longer know whether variables are still informative when conditioned to many others. All in all, this
figure thus indeed confirms that importances as computed with non-totally randomized trees take
into account only some of the conditioning sets B, hence biasing the measured importances.
7 Conclusions
In this work, we made a first step towards understanding variable importances as computed with
a forest of randomized trees. In particular, we derived a theoretical characterization of the Mean
Decrease Impurity importances as computed by totally randomized trees in asymptotic conditions.
We showed that they offer a three-level decomposition of the information jointly provided by all
input variables about the output (Section 3). We then demonstrated (Section 4) that MDI importances
as computed by totally randomized trees exhibit desirable properties for assessing the relevance of
a variable: it is equal to zero if and only if the variable is irrelevant and it depends only on the
relevant variables. We discussed the case of Random Forests and Extra-Trees (Section 5) and finally
illustrated our developments on an artificial but insightful example (Section 6).
There remain several limitations to our framework that we would like to address in the future. First, our
results should be adapted to binary splits as used within an actual Random Forest-like algorithm. In
this setting, any node t is split in only two subsets, which means that any variable may then appear
one or several times within a branch, and thus should make variable importances now dependent on
the cardinalities of the input variables. In the same direction, our framework should also be extended
to the case of continuous variables. Finally, results presented in this work are valid in an asymptotic
setting only. An important direction of future work includes the characterization of the distribution
of variable importances in a finite setting.
Acknowledgements. Gilles Louppe is a research fellow of the FNRS (Belgium) and acknowledges its financial
support. This work is supported by PASCAL2 and the IUAP DYSCO, initiated by the Belgian State, Science
Policy Office.
References
Biau, G. (2012). Analysis of a random forests model. The Journal of Machine Learning Research, 98888:1063–1095.
Biau, G., Devroye, L., and Lugosi, G. (2008). Consistency of random forests and other averaging classifiers. The Journal of Machine Learning Research, 9:2015–2033.
Breiman, L. (1996). Bagging predictors. Machine learning, 24(2):123–140.
Breiman, L. (2001). Random forests. Machine learning, 45(1):5–32.
Breiman, L. (2002). Manual on setting up, using, and understanding random forests v3.1. Statistics Department University of California Berkeley, CA, USA.
Breiman, L. (2004). Consistency for a simple model of random forests. Technical report, UC Berkeley.
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and regression trees.
Genuer, R., Poggi, J.-M., and Tuleau-Malot, C. (2010). Variable selection using random forests. Pattern Recognition Letters, 31(14):2225–2236.
Geurts, P., Ernst, D., and Wehenkel, L. (2006). Extremely randomized trees. Machine Learning, 63(1):3–42.
Ho, T. (1998). The random subspace method for constructing decision forests. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 20(8):832–844.
Ishwaran, H. (2007). Variable importance in binary regression trees and forests. Electronic Journal of Statistics, 1:519–537.
Kohavi, R. and John, G. H. (1997). Wrappers for feature subset selection. Artificial intelligence, 97(1):273–324.
Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest. R news, 2(3):18–22.
Lin, Y. and Jeon, Y. (2006). Random forests and adaptive nearest neighbors. Journal of the American Statistical Association, 101(474):578–590.
Meinshausen, N. (2006). Quantile regression forests. The Journal of Machine Learning Research, 7:983–999.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. (2011). Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825–2830.
Strobl, C., Boulesteix, A.-L., Kneib, T., Augustin, T., and Zeileis, A. (2008). Conditional variable importance for random forests. BMC bioinformatics, 9(1):307.
Strobl, C., Boulesteix, A.-L., Zeileis, A., and Hothorn, T. (2007). Bias in random forest variable importance measures: Illustrations, sources and a solution. BMC bioinformatics, 8(1):25.
White, A. P. and Liu, W. Z. (1994). Technical note: Bias in information-based measures in decision tree induction. Machine Learning, 15(3):321–329.
Zhao, G. (2000). A new perspective on classification. PhD thesis, Utah State University, Department of Mathematics and Statistics.
Learning and using language via recursive pragmatic reasoning about other agents
Nathaniel J. Smith*
University of Edinburgh
Noah D. Goodman
Stanford University
Michael C. Frank
Stanford University
Abstract
Language users are remarkably good at making inferences about speakers' intentions in context, and children learning their native language also display substantial skill in acquiring the meanings of unknown words. These two cases are deeply
related: Language users invent new terms in conversation, and language learners
learn the literal meanings of words based on their pragmatic inferences about how
those words are used. While pragmatic inference and word learning have both
been independently characterized in probabilistic terms, no current work unifies
these two. We describe a model in which language learners assume that they
jointly approximate a shared, external lexicon and reason recursively about the
goals of others in using this lexicon. This model captures phenomena in word
learning and pragmatic inference; it additionally leads to insights about the emergence of communicative systems in conversation and the mechanisms by which
pragmatic inferences become incorporated into word meanings.
1 Introduction
Two puzzles present themselves to language users: What do words mean in general, and what do
they mean in context? Consider the utterances "it's raining," "I ate some of the cookies," or "can you close the window?" In each, a listener must go beyond the literal meaning of the words to fill in contextual details ("it's raining here and now"), infer that a stronger alternative is not true ("I ate some but not all of the cookies"), or more generally infer the speaker's communicative goal ("I want you to close the window right now because I'm cold"), a process known as pragmatic reasoning. Theories of pragmatics frame the process of language comprehension as inference about the generating goal of an utterance given a rational speaker [14, 8, 9]. For example, a listener might reason, "if she had wanted me to think 'all' of the cookies, she would have said 'all', but she didn't. Hence 'all' must not be true and she must have eaten some but not all of the cookies." This kind of reasoning is core to language use.
But pragmatic reasoning about meaning-in-context relies on stable literal meanings that must themselves be learned. In both adults and children, uncertainty about word meanings is common, and
often considering speakers? pragmatic goals can help to resolve this uncertainty. For example, if a
novel word is used in a context containing both a novel and a familiar object, young children can
make the inference that the novel word refers to the novel object [22].¹ For adults who are proficient language users, there are also a variety of intriguing cases in which listeners seem to create
situation- and task-specific ways of referring to particular objects. For example, when asked to refer
to idiosyncratic geometric shapes, over the course of an experimental session, participants create
conventionalized descriptions that allow them to perform accurately even though they do not begin
with shared labels [19, 7]. In both of these examples, reasoning about another person?s goals informs
*[email protected]
¹ Very young children make inferences that are often labeled as "pragmatic" in that they involve reasoning
about context [6, 1], though in some cases they are systematically "too literal" (e.g. failing to strengthen SOME
to SOME-BUT-NOT-ALL [23]). Here we remain agnostic about the age at which children are able to make such
inferences robustly, as it may vary depending on the linguistic materials being used in the inference [2].
language learners' estimates of what words are likely to mean.
Despite this intersection, there is relatively little work that takes pragmatic reasoning into account
when considering language learning in context. Recent work on grounded language learning has
attempted to learn large sets of (sometimes relatively complex) word meanings from noisy and ambiguous input (e.g. [10, 17, 20]). And a number of models have begun to formalize the consequences
of pragmatic reasoning in situations where limited learning takes place [12, 9, 3, 13]. But as yet
these two strands of research have not been brought together so that the implications of pragmatics
for learning can be investigated directly.
The goal of our current work is to investigate the possibilities for integrating models of recursive
pragmatic reasoning with models of language learning, with the hope of capturing phenomena in
both domains. We begin by describing a proposal for bringing the two together, noting several
issues in previous approaches based on recursive reasoning under uncertainty. We next simulate
findings on pragmatic inference in one-shot games (replicating previous work). We then build on
these results to simulate the results of pragmatic learning in the language acquisition setting where
one communicator is uncertain about the lexicon and in iterated communication games where both
communicators are uncertain about the lexicon.
2 Model
We model a standard communication game [19, 7]: two participants each, separately, view identical
arrays of objects. On the Speaker's screen, one object is highlighted; their goal is to get the Listener
to click on this item. To do this, they have available a fixed, finite set of words; they must pick one.
The Listener then receives this word, and attempts to guess which object the Speaker meant by it.
In the psychology literature, as in real-world interactions, games are typically iterated; one view of
our contribution here is as a generalization of one-shot models [9, 3] to the iterated context.
2.1 Paradoxes in optimal models of pragmatic learning. Multi-agent interactions are difficult
to model in a normative or optimal framework without falling prey to paradox. Consider a simple
model of the agents in the above game. First we define a literal listener L0 . This agent has a
lexicon of associations between words and meanings; specifically, it assigns each word w a vector
of numbers in (0, 1) describing the extent to which this word provides evidence for each possible
object.² To interpret a word, the literal listener simply re-weights their prior expectation about what
is referred to using their lexicon's entry for this word:

P_{L_0}(object | word, lexicon) ∝ lexicon(word, object) · P_prior(object)   (1)
Because of the normalization in this equation, there is a systematic but unimportant symmetry among
lexicons; we remove this by assuming the lexicon sums to 1 over objects for each word. Confronted with such a listener, a speaker who chooses approximately optimal actions should attempt
to choose a word which soft-maximizes the probability that the listener will assign to the target
object, modulated by the effort or cost associated with producing this word:

P_{S_1}(word | object, lexicon) ∝ exp( λ ( log P_{L_0}(object | word, lexicon) − cost(word) ) )   (2)
But given this speaker, the naive L₀ strategy is not optimal. Instead, listeners should use Bayes'
rule to invert the speaker's decision procedure [9]:
P_{L_2}(object | word, lexicon) ∝ P_{S_1}(word | object, lexicon) · P_prior(object)   (3)
Now a difficulty becomes apparent. Given such a listener, it is no longer optimal for speakers
to implement strategy S₁; instead, they should implement strategy S₃, which soft-maximizes P_{L_2}
instead of P_{L_0}. And then listeners ought to implement L₄, and so on.
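To make the recursion concrete, the following sketch implements equations (1)-(3); it is an illustrative implementation, not the authors' code, and the lexicon values, costs, and λ are made-up:

```python
import numpy as np

# Toy setup: 2 words x 2 objects. lexicon[w, o] is the evidence word w
# provides for object o (rows sum to 1, as assumed in the text).
lexicon = np.array([[0.5, 0.5],   # word 0 is literally compatible with both
                    [0.0, 1.0]])  # word 1 only fits object 1
prior = np.array([0.5, 0.5])      # P_prior(object)
cost = np.array([0.0, 0.0])       # cost(word)
lam = 3.0                         # soft-max parameter (the paper uses lambda = 3)

def L0(word):
    """Literal listener, eq. (1): reweight the prior by the lexicon entry."""
    p = lexicon[word] * prior
    return p / p.sum()

def S1(obj):
    """Speaker, eq. (2): soft-maximize log P_L0(obj | word) - cost(word)."""
    util = np.array([np.log(L0(w)[obj] + 1e-12) - cost[w]
                     for w in range(len(lexicon))])
    p = np.exp(lam * util)
    return p / p.sum()

def L2(word):
    """Pragmatic listener, eq. (3): invert the speaker with Bayes' rule."""
    p = np.array([S1(o)[word] for o in range(len(prior))]) * prior
    return p / p.sum()

print(L0(0), L2(0))  # L2 shifts mass toward the object that word 1 cannot name
```

With these numbers, L0(0) stays at (0.5, 0.5) while L2(0) moves to roughly (0.9, 0.1): hearing the ambiguous word, the pragmatic listener infers the object the unambiguous word could not have described.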
One option is to continue iterating such strategies until reaching a fixed point equilibrium. While this
strategy guarantees that each agent will behave normatively given the other agent's strategy, there
is no guarantee that such strategies will be near the system's global optimum. More importantly,
² We assume words refer directly to objects, rather than to abstract semantic features. Our simplification
is without loss of generality, however, because we can interpret our model as marginalizing over such a
representation, with our literal P_lexicon(object | word) = Σ_features P(object | features) · P_lexicon(features | word).
there is a great deal of evidence that humans do not use such equilibrium strategies; their behavior in
language games (and in other games [5]) can be well-modeled as implementing Sk or Lk for some
small k [9]. Following this work, we recurse a finite (small) number of times, n. The consequence
is that one agent, implementing S_n, is fully optimal with respect to the game, while the other,
implementing L_{n−1}, is only nearly optimal: off by a single recursion.
This resolves one problem, but as soon as we attempt to add uncertainty about the meanings of words
to such a model, a new paradox arises. Suppose the listener is a young child who is uncertain about
the lexicon their partner is using. The obvious solution is for them to place a prior on the lexicon;
they then update their posterior based on whatever utterances and contextual cues they observe,
and in the mean time interpret each utterance by making their best guess, marginalizing out this
uncertainty. This basic structure is captured in previous models of Bayesian word learning [10]. But
when combined with the recursive pragmatic model, a new question arises: Given such a listener,
what model should the speaker use? A rational speaker attempts to maximize the listener's likelihood
of understanding, so if an uncertain listener interprets by marginalizing over some posterior, then a
fully knowledgeable speaker should disregard their own lexical knowledge, and instead model and
marginalize over the listener's uncertainty. But if they do this, then their utterances will provide no
data about their lexicon, and there is nothing for the rational listener to learn from observing them.³
One final problem is that under this model, when agents switch roles between listener and speaker,
there is nothing constraining them to continue using the same language. Optimizing task performance requires my lexicon as a speaker to match your lexicon as a listener and vice-versa, but there
is nothing that relates my lexicon as a speaker to my lexicon as a listener, because these never interact. This clearly represents a dramatic mismatch to typical human communication, which almost
never proceeds with distinct languages spoken by each participant.
2.2 A conventionality-based model of pragmatic word learning. We resolve the problems described above by assuming that speakers and listeners deviate from normative behavior by assuming
a conventional lexicon. Specifically, our final convention-based agents assume: (a) There is some
single, specific literal lexicon which everyone should be using, (b) and everyone else knows this
lexicon, and believes that I know it as well, (c) but in fact I don?t. These assumptions instantiate a
kind of ?social anxiety? in which agents are all trying to learn the correct lexicon that they assume
everyone else knows.
Assumption (a) corresponds to the lexicographer's illusion: Naive language users will argue vociferously that words have specific meanings, even though these meanings are unobservable to everyone
who purportedly uses them. It also explains why learners speak the language they hear (rather than
some private language that they assume listeners will eventually learn): Under assumption (a), observing other speakers' behavior provides data about not just that speaker's idiosyncratic lexicon,
but the consensus lexicon. Assumption (b) avoids the explosion of hyperⁿ-distributions described
above: If agent n knows the lexicon, they assume that all lower agents do as well, reducing to the
original tractable model without uncertainty. And assumption (c) introduces a limited form of uncertainty at the top level, and thus the potential for learning. To the extent that a child's interlocutors
do use a stable lexicon and do not fully adapt their speech to accommodate the child's limitations,
these assumptions make a reasonable approximation for the child language learning case. In general, though, in arbitrary multi-turn interactions in which both agents have non-trivial uncertainty,
these assumptions are incorrect, and thus induce complex and non-normative learning dynamics.
Formally, let an unadorned L and S denote the listener and speaker who follow the above assumptions. If the lexicon were known, then the listener would draw inferences as in L_{n−1} above; but by
assumption (c), they have uncertainty, which they marginalize out:

P_L(object | word, L's data) = ∫ P_{L_{n−1}}(object | word, lexicon) P(lexicon | L's data) d(lexicon)   (4)
³ Of course, in reality both parties will generally have some uncertainty, making the situation even worse. If
we start from an uncertain listener with a prior over lexicons, then a first-level uncertain speaker needs a prior
over priors on lexicons, a second-level uncertain listener needs a prior over priors over priors, etc. The original
L₀ → S₁ → ... recursion was bad enough, but at least each step had a constant cost. This new recursion
produces hyperⁿ-distributions for which inference almost immediately becomes intractable even in principle,
since the dimensionality of the learning problem increases with each step. Yet, without this addition of new
uncertainty at each level, the model would dissolve back into certainty as in the previous paragraph, making
learning impossible.
| Phenomenon                                            | Ref. | WL | PI | PI+U | PI+WL | Section |
|-------------------------------------------------------|------|----|----|------|-------|---------|
| Interpreting scalar implicature                       | [14] |    | x  | x    | x     | 3.1     |
| Interpreting Horn implicature                         | [15] |    |    | x    | x     | 3.2     |
| Learning literal meanings despite scalar implicature  | [21] |    |    |      | x     | 4.1     |
| Disambiguating new words using old words              | [22] | x  |    | x    | x     | 4.2     |
| Learning new words using old words                    | [22] | x  |    |      | x     | 4.2     |
| Disambiguation without learning                       | [16] |    |    | x    | x     | 4.2     |
| Emergence of novel & efficient lexicons               | [11] |    |    |      | x     | 5.1     |
| Lexicalization of Horn implicature                    | [15] |    |    |      | x     | 5.2     |

Table 1: Empirical results and references. WL refers to the word learning model of [10]; PI refers
to the recursive pragmatic inference model of [9]; PI+U refers to the pragmatic inference model of
[3], which includes lexical uncertainty, marginalizes it out, and then recurses. Our current model is
referred to here as PI+WL, and combines pragmatic inference with word learning.
Here L's data consists of her previous experience with language. In particular, in the iterated games
explored here it consists of S's previous utterances together with whatever other information L may
have about their intended referents (e.g. from contextual clues). By assumption (b), L treats these
utterances as samples from the knowledgeable speaker S_{n−2}, not S, and thus as being informative
about the lexicon. For instance, when the data is a set of fully observed word-referent pairs {w_i, o_i}:

P(lexicon | L's data) ∝ P(lexicon) ∏_i P_{S_{n−2}}(w_i | o_i, lexicon)   (5)
The top-level speaker S attempts to select the word which soft-maximizes their utility, with utility
now being defined in terms of the informativity of the expectation (over lexicons) that the listener
will have for the right referent:⁴

P_S(word | object, S's data) ∝ exp( λ ( ∫ log P_{L_{n−1}}(object | word, lexicon) P(lexicon | S's data) d(lexicon) − cost(word) ) )   (6)

⁴ An alternative model would have the speaker take the expectation over informativity, instead of the informativity of the expectation, which would correspond to slightly different utility functions. We adopt the current
formulation for consistency with [3].
Here P(lexicon | S's data) is defined similarly, when S observes L's interpretations of various utterances, and treats them as samples from L_{n−1}, not L. However, notice that if S and L have the
same subjective distributions over lexicons, then S is approximately optimal with respect to L in the
same sense that S_k is approximately optimal with respect to L_{k−1}. In one-shot games, this model
is conceptually equivalent to that of [3] restricted to n = 3; our key innovations are that we allow
learning by replacing their P (lexicon) with P (lexicon|data), and provide a theoretical justification
for how this learning can occur.
In the remainder of the paper, we apply the model described above to a set of one-shot pragmatic
inference games that have been well-studied in linguistics [14, 15] and are addressed by previous
one-shot models of pragmatic inference [9, 3]. These situations set the stage for simulations investigating how learning proceeds in iterated versions of such games, described in the following section.
Results captured by our model and previous models are summarized in Table 1. In our simulations
throughout, we somewhat arbitrarily set the recursion depth n = 3 (the minimal value that produces
all the qualitative phenomena), λ = 3, and assume that all agents have shared priors on the lexicon
and full knowledge of the cost function. Inference is via importance sampling from a Dirichlet prior
over lexicons.
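A minimal sketch of this inference step follows; the helper names, Dirichlet prior, and sample count are illustrative rather than the authors' settings, and for brevity the inner listener is the literal L₀ rather than the full L_{n−1}. It draws candidate lexicons from a Dirichlet prior, weights each by the likelihood of the observed word-object pairs under the speaker (eq. 5), and marginalizes as in eq. (4):

```python
import numpy as np

rng = np.random.default_rng(0)
W, O, K = 2, 2, 2000            # words, objects, importance samples
prior = np.array([0.5, 0.5])    # P_prior(object)

def sample_lexicon():
    # Each word's row over objects ~ Dirichlet(1, ..., 1), summing to 1.
    return rng.dirichlet(np.ones(O), size=W)

def S_likelihood(lexicon, word, obj, lam=3.0):
    # Knowledgeable speaker with this lexicon (eq. 2, zero costs here).
    p0 = lexicon * prior
    p0 = p0 / p0.sum(axis=1, keepdims=True)   # literal listener for each word
    util = np.log(p0[:, obj] + 1e-12)
    s = np.exp(lam * util)
    return (s / s.sum())[word]

data = [(0, 0), (0, 0)]          # observed (word, object) pairs
lexicons = [sample_lexicon() for _ in range(K)]
weights = np.array([np.prod([S_likelihood(L, w, o) for (w, o) in data])
                    for L in lexicons])
weights = weights / weights.sum()

def posterior_listener(word):
    # Eq. (4): marginalize the (here literal) listener over the posterior.
    out = np.zeros(O)
    for L, wt in zip(lexicons, weights):
        p = L[word] * prior
        out += wt * p / p.sum()
    return out

print(posterior_listener(0))  # shifts toward object 0 after the observations
```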
3 Pragmatic inference in one-shot games
3.1 Scalar implicature. Many sets of words in natural language form scales in which each term
makes a successively stronger claim. "Some" and "all" form a scale of this type. While "I ate some
of the cookies" is compatible with the followup "in fact, I ate all of the cookies," the reverse is not
true. "Might" and "must" are another example, as are "OK," "good," and "excellent." All of these
scales allow for scalar implicatures [14]: the use of a less specific term pragmatically implies that
the more specific term does not apply. So although "I ate some of the cookies" could in principle be
compatible with eating ALL of them, the listener is led to believe that SOME-BUT-NOT-ALL is the
likely state of affairs. The recursive pragmatic reasoning portions of our model capture findings on
scalar implicature in the same manner as previous models [3, 13].
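Continuing the illustrative §2.1 sketch (same globals, same caveats), scalar implicature falls out of a two-word lexicon in which "some" is literally compatible with both states:

```python
# States are {SOME-BUT-NOT-ALL, ALL}; "some" (word 0) is literally true of
# both, while "all" (word 1) is only true of ALL.
lexicon = np.array([[0.5, 0.5],   # "some"
                    [0.0, 1.0]])  # "all"
print(L2(0))  # hearing "some", L2 concentrates on SOME-BUT-NOT-ALL:
              # a speaker meaning ALL would have said "all".
```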
3.2 Horn implicature. Consider a world which contains two words and two types of objects. One
word is expensive to use, and one is cheap (call them "expensive" and "cheap" for short). One object
type is common and one is rare; denote these COMMON and RARE. Intuitively, there are two possible
communicative systems here: a good system where "cheap" refers to COMMON and "expensive"
refers to RARE, and a bad system where the opposite holds. Obviously we would prefer to use the
good system, but it has historically proven very difficult to derive this conclusion in a game theoretic
setting, because both systems are stable equilibria: if our partner uses the bad system, then we would
rather follow and communicate at some cost than switch to the good system and fail entirely [3].
Humans, however, unlike traditional game theoretic models, do make the inference that given two
otherwise equivalent utterances, the costly utterance should have a rare or unusual meaning. We
call this pattern Horn implicature, after [15]. For instance, "Lee got the car to stop" implies that
Lee used an unusual method (e.g. not the brakes) because, had he used the brakes, the speaker
would have chosen the simpler and shorter (less costly) expression, "Lee stopped the car" [15].
Surprisingly, Bergen et al. [3] show that the key to achieving this favorable result is ignorance. If
a listener assigns equal probability to her partner using the good system or the bad system, then
their best bet is to estimate PS (word|object) as the average of PS (word|object, good system) and
PS (word|object, bad system). These might seem to cancel out, but in fact they do not. In the good
system, the utilities of the speaker's actions are relatively strongly separated compared to the bad
system; therefore, a soft-max agent in the bad system has noisier behavior than in the good system,
and the behavior in the good system dominates the average. Similar reasoning applies to an uncertain
speaker. For example, in our model with a uniform prior over lexicons and P_prior(COMMON) =
0.8, cost("cheap") = 0.5, cost("expensive") = 1.0, the symmetry breaks in the appropriate way:
Despite total ignorance about the conventional system, our modeled speakers prefer to use simple
words for common referents (P_S("cheap" | COMMON) = 0.88, P_S("cheap" | RARE) = 0.46), and
listeners show a similar bias (P_L(COMMON | "cheap") = 0.77, P_L(COMMON | "expensive") = 0.65).
This preference is weak; the critical point is that it exists at all, given the unbiased priors. We return
to this in §5.2. [3] report a much stronger preference, which they accomplish by applying further
layers of pragmatic recursion on top of these marginal distributions. On the one hand, this allows
them to better fit their empirical data; on the other, it removes the possibility of learning the literal
lexicon that underlies pragmatic inference: further recursion above the uncertainty means that it is
only hypothetical agents who are ignorant, while the actual speaker and listener have no uncertainty
about each other's generative process.
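The ignorance-based symmetry breaking can be reproduced in a few lines. This is a simplified one-level sketch (illustrative prior, costs, and sample count; no deeper recursion), so the probabilities will differ from the figures quoted above, but the direction of the bias is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, K = 3.0, 4000
prior = np.array([0.8, 0.2])       # P(COMMON), P(RARE)
cost = np.array([0.5, 1.0])        # cost("cheap"), cost("expensive")

def speaker(obj, lex):
    # Soft-max speaker (eq. 2) against the literal listener for lexicon lex.
    p0 = lex * prior
    p0 = p0 / p0.sum(axis=1, keepdims=True)
    util = np.log(p0[:, obj] + 1e-12) - cost
    s = np.exp(lam * util)
    return s / s.sum()

# Uncertain speaker: marginalize over a uniform (Dirichlet) prior on lexicons.
lexicons = rng.dirichlet(np.ones(2), size=(K, 2))
for obj, name in [(0, "COMMON"), (1, "RARE")]:
    p = np.mean([speaker(obj, L) for L in lexicons], axis=0)
    print(name, "-> P('cheap') =", round(p[0], 3))
# The cheap word is preferred more strongly for COMMON than for RARE.
```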
4 Pragmatics in learning from a knowledgeable speaker
4.1 Learning literal meanings despite scalar implicatures. The acquisition of quantifiers like
"some" provides a puzzle for most models of word learning: given that in many contexts, the word
"some" is used to mean SOME-BUT-NOT-ALL, how do children learn that SOME-BUT-NOT-ALL is
not in fact its literal meaning? Our model is able to take scalar implicatures into account when learning, and thus provide a potential solution, congruent with the observation that no known language
in fact lexicalizes SOME-BUT-NOT-ALL [21].
Following the details of §3.1, we created a simulation in which the model's prior fixed the meaning of "all" to be a particular set ALL, but was ambiguous about whether "some" literally meant
SOME-BUT-NOT-ALL (incorrect) or SOME-BUT-NOT-ALL-OR-ALL (correct). The model was then
exposed to training situations in which "some" was used to refer to SOME-BUT-NOT-ALL. Despite
this training, the model maintained substantial posterior probability on the correct hypothesis about
the meaning of "some." Essentially, the model reasoned that although it had unambiguous evidence
for "some" being used to refer to SOME-BUT-NOT-ALL, this was nonetheless consistent with a literal meaning of SOME-BUT-NOT-ALL-OR-ALL which had then been pragmatically strengthened.
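The logic of that inference can be seen in a toy hypothesis comparison (illustrative setup; two discrete lexicon hypotheses stand in for the full continuous prior):

```python
import numpy as np

lam = 3.0
# Hypotheses about the literal row for "some" over states {SBNA, ALL};
# "all" is pinned to ALL in both.
H = {"some = SBNA only":   np.array([[1.0, 0.0], [0.0, 1.0]]),
     "some = SBNA or ALL": np.array([[0.5, 0.5], [0.0, 1.0]])}
post = {h: 0.5 for h in H}

def spk(obj, lex):
    lit = lex / lex.sum(axis=1, keepdims=True)
    s = np.exp(lam * np.log(lit[:, obj] + 1e-12))
    return s / s.sum()

# Observe "some" (word 0) used for SBNA (state 0) ten times.
for _ in range(10):
    for h, lex in H.items():
        post[h] *= spk(0, lex)[0]
    z = sum(post.values())
    post = {h: p / z for h, p in post.items()}
print(post)  # both hypotheses predict the data, so the posterior barely moves
```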
[Figure 1 plots: P(L understands S) vs. dialogue turn (runs 1 and 2) for a 2-word/2-object game and a 3-word/3-object game, with words × objects lexicon panels for each agent.]
Figure 1: Simulations of two pragmatic agents playing a naming game. Each panel shows two
representative simulation runs, with run 1 chosen to show strong convergence and run 2 chosen to
show relatively weaker convergence. At each stage, S and L have different, possibly contradictory
posteriors over the conventional, consensus lexicon. From these posteriors we derive the probability
P(L understands S) (marginalizing over target objects and word choices), and also depict graphically S's model of the listener (top row), and L's actual model (bottom row).
Thus, a pragmatically-informed learner might be able to maintain the true meaning of SOME despite
seemingly conflicting evidence.
4.2 Disambiguation using known words. Children, when presented with both a novel and a
familiar object (e.g. an eggbeater and a ball), will treat a novel label (e.g. "dax") as referring to the
novel object, for example by supplying the eggbeater when asked to "give me the dax" [22]. This
phenomenon is sometimes referred to as "mutual exclusivity." Simple probabilistic word learning
models can produce a similar pattern of findings [10], but all such models assume that learners retain
the mapping between novel word and novel object demonstrated in the experimental situation. This
observation is contradicted, however, by evidence that children often do not retain the mappings that
are demonstrated by their inferences in the moment [16].
Our model provides an intriguing possible explanation of this finding: when simulating a single
disambiguation situation, the model gives a substantial probability (e.g. 75%) that the speaker is
referring to the novel object. Nevertheless, this inference is not accompanied by an increased belief
that the novel word literally refers to this object. The learner's interpretation arises not from lexical
mapping but instead from a variant of scalar implicature: the listener knows that the familiar word
does not refer to the novel object; hence the novel word will be the best way to refer to the novel
object, even if it literally could refer to either. Nevertheless, on repeated exposure to the same novel
word, novel object situation, the learner does learn the mapping as part of the lexicon (congruent
with other data on repeated training on disambiguation situations [4]).
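A hedged sketch of this disambiguation effect (illustrative numbers; the pragmatic listener is again built on the literal L₀ for brevity): the familiar word's meaning is pinned, the novel word's row is uncertain, and the marginal listener already prefers the novel object without any update to the lexicon posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, K = 3.0, 4000
prior = np.array([0.5, 0.5])   # objects: [familiar (ball), novel (eggbeater)]

def listener_given_lex(word, lex):
    # Pragmatic listener for a *known* lexicon: invert the soft-max speaker.
    def spk(obj):
        p0 = lex * prior
        p0 = p0 / p0.sum(axis=1, keepdims=True)
        s = np.exp(lam * np.log(p0[:, obj] + 1e-12))
        return s / s.sum()
    p = np.array([spk(o)[word] for o in range(2)]) * prior
    return p / p.sum()

# Word 0 ("ball") is known to mean the familiar object; word 1 ("dax") is
# uncertain: its row over objects is drawn from a uniform Dirichlet prior.
samples = []
for _ in range(K):
    lex = np.array([[1.0, 0.0], rng.dirichlet(np.ones(2))])
    samples.append(listener_given_lex(1, lex))   # hear "dax"
print(np.mean(samples, axis=0))  # mass concentrates on the novel object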
5 Pragmatic reasoning in the absence of conventional meanings
5.1 Emergence of efficient communicative conventions. Experimental results suggest that communicators who start without a usable communication system are able to establish novel, consensus-based systems. For example, adults playing a communication game using only novel symbols with
no conventional meaning will typically converge on a set of new conventions which allow them to
accomplish their task [11]. Or in a less extreme example, communicators asked to refer to novel
objects invent conventional names for them over the course of repeated interactions (e.g., "the ice
skater" for an abstract figure vaguely resembling an ice skater, [7]). From a pure learning perspective
this behavior is anomalous, however: Since both agents know perfectly well that there is no existing
convention to discover, there is nothing to learn from the other's behavior. Furthermore, even if only
one partner is producing the novel expressions, their behavior in these studies still becomes more
regular (conventional) over time, which would seem to rule out a role for learning; even if there is
some pattern in the expressions the speaker chooses to use, there is certainly nothing for the speaker
to learn by observing these patterns, and thus their behavior should not change over time.
[Figure 2 plots: P(L understands S) vs. dialogue turn (runs 1 and 2), with words × objects lexicon panels.]
Figure 2: Example simulations showing the lexicalization of Horn implicatures. Plotting conventions are as above. In the first run, speaker and
listener converge on a sparse and efficient communicative equilibrium, in which "cheap" means
COMMON and "expensive" means RARE,
while in the second they reach a sub-optimal
equilibrium. As shown in Fig. 3, the former is
more typical.
[Figure 3 plots: left, mean P(L understands S) vs. dialogue turn for 2x2 uniform prior, 3x3 uniform prior, and Horn implicature games; right, Horn lexicalization rate vs. dialogue turn for the good and bad lexicons.]
Figure 3: Averaged behavior
over 300 dialogues as in Figs. 1
and 2. Left: Communicative
success by game type and dialogue turn. Right: Proportion of
dyads in the Horn implicature
game (§5.2) who have converged on the "good" or "bad"
lexicons and believe that these
are literal meanings.
To model such phenomena, we imagine two agents playing the simple referential game introduced
in §2. On each turn the speaker is assigned a target object, utters some word referring to this object,
the listener makes a guess at the object, and then, critically, the speaker observes the listener's
guess and the listener receives feedback indicating the correct answer (i.e., the speaker's intended
referent). Both agents then update their posterior over lexicons before proceeding to the next trial.
As in [19, 7], the speaker and listener remain fixed in the same role throughout.
Fig. 1 shows the result of simulating several such games when both parties begin with a uniform prior
over lexicons. Notice that: (a) agents' performance begins at chance but quickly rises, as a communicative system emerges where none previously existed; (b) they tend towards structured, sparse
lexicons with a one-to-one correspondence between objects and words, i.e., these communicative systems are biased towards being useful and efficient; and (c) as the speaker and listener have entirely
different data (the listener's interpretations and the speaker's intended referent, respectively), unlucky early guesses can lead them to believe in entirely contradictory lexicons, but they generally
recover and converge. Each agent effectively uses their partner's behavior as a basis for forming
weak beliefs about the underlying lexicon that they assume must exist. Since they then each act on
these beliefs, and their partner uses the resulting actions to form new beliefs, they soon converge on
using similar lexicons, and what started as a "superstition" becomes normatively correct. And unlike some previous models of emergence across multiple generations of agents [18, 25], this occurs
within individual agents in a single dialogue.
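A skeleton of the iterated loop follows; this is an illustrative particle-filter sketch under stated simplifications (the literal listener stands in for the deeper recursion, both agents share one particle set with separate weights, and the game sizes are made-up):

```python
import numpy as np

rng = np.random.default_rng(2)
W, O, lam, K, T = 2, 2, 3.0, 500, 12

def spk(obj, lex):
    # Soft-max speaker against the literal listener for one lexicon particle.
    lit = lex / lex.sum(axis=1, keepdims=True)     # P_L0(object | word)
    s = np.exp(lam * np.log(lit[:, obj] + 1e-12))
    return s / s.sum()

# Both agents hold particle posteriors over the assumed consensus lexicon.
parts = rng.dirichlet(np.ones(O), size=(K, W))
S_w = np.ones(K) / K          # speaker's posterior weights
L_w = np.ones(K) / K          # listener's posterior weights

for t in range(T):
    target = rng.integers(O)
    # Speaker marginalizes its posterior and samples a word.
    p_word = (S_w[:, None] * np.array([spk(target, L) for L in parts])).sum(0)
    word = rng.choice(W, p=p_word / p_word.sum())
    # Listener's marginal interpretation of that word.
    lits = parts[:, word, :] / parts[:, word, :].sum(axis=1, keepdims=True)
    p_obj = (L_w[:, None] * lits).sum(0)
    print(t, "P(L picks target) =", round(p_obj[target] / p_obj.sum(), 3))
    # Feedback: both condition on (word, target) as if produced/understood
    # by a partner who already knows the lexicon (assumptions (a)-(c)).
    S_w *= lits[:, target]
    S_w /= S_w.sum()
    L_w *= np.array([spk(target, L)[word] for L in parts])
    L_w /= L_w.sum()
```

Over the turns, both weight vectors concentrate on mutually compatible lexicon particles, so the printed understanding probability rises from chance, mirroring Fig. 1.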
5.2 Lexicalization and loss of Horn implicatures. A stronger example of how pragmatics can
create biases in emerging lexicons can be observed by considering a version of this game played in
the "cheap"/"expensive"/COMMON/RARE domain introduced in our discussion of Horn implicature
(§3.2). Here, a uniform prior over lexicons, combined with pragmatic reasoning, causes each agent
to start out weakly biased towards the associations "cheap" → COMMON, "expensive" → RARE. A
fully rational listener who observed an uncertain speaker using words in this manner would therefore
discount it as arising from this bias, and conclude that the speaker was, in fact, highly uncertain. Our
convention-based listener, however, believes that speakers do know which convention is in use, and
therefore tends to misinterpret this biased behavior as positive evidence that the "good" system is in
use. Similarly, convention-based speakers will wager that since on average they will succeed more
often if listeners are using the "good" system, they might as well try it. When they succeed, they
take their success as evidence that the listener was in fact using the good system all along. As a
result, dyads in this game end up converging onto a stable system at a rate far above chance, and
preferentially onto the "good" system (Figs. 2 and 3).
In the process, though, something interesting happens. In this model, Horn implicatures depend on
uncertainty about literal meaning. As the agents gather more data, their uncertainty is reduced, and
thus through the course of a dialogue, the implicature is replaced by a belief that "cheap" literally
means COMMON (and did all along). To demonstrate this phenomenon, we queried each agent in
each simulated dyad about how they would refer to or interpret each object and word, if the two
objects were equally common, which cancels the Horn implicature. As shown in Fig. 3 (right), after
30 turns, in nearly 70% of dyads both S and L used the "good" mapping even in this implicature-free
case, while less than 20% used the "bad" mapping (with the rest being inconsistent).
This points to a fundamental difference in how learning interacts with Horn versus scalar implicatures. Depending on the details of the input, it is possible for our convention-based agents to observe
pragmatically strengthened uses of scalar terms (e.g., "some" used to refer to SOME-BUT-NOT-ALL),
without becoming confused into thinking that "some" literally means SOME-BUT-NOT-ALL (§4.1).
This occurs because scalar implicature depends only on recursive pragmatic reasoning (§2.1), which
our convention-based agents' learning rules are able to model and correct for. But, while our agents
are able to use Horn implicatures in their own behaviour (§3.2), this happens implicitly as a result
of their uncertainty, and our agents do not model the uncertainty of other agents; thus, when they
observe other agents using Horn implicatures, they cannot interpret this behavior as arising from an
implicature. Instead, they take it as reflecting the actual literal meaning. And this result isn't just
a technical limitation of our implementation, but is intrinsic to our convention-based approach to
combining pragmatics and learning: in our system, the only thing that makes word learning possible at all is each agent's assumption that other agents are better informed; otherwise, other agents'
behavior would not provide any useful data for learning. Our model therefore makes the interesting
prediction that all else being equal, uncertainty-based implicatures should over time be more prone
to lexicalizing and becoming part of literal meaning than recursion-based implicatures are.
6 Conclusion
Language learners and language users must consider word meanings both within and across contexts. A critical part of this process is reasoning pragmatically about agents' goals in individual
situations. In the current work we treat agents communicating with one another as assuming that
there is a shared conventional lexicon which they both rely on, but with differing degrees of knowledge. They then reason recursively about how this lexicon should be used to convey particular
meanings in context. These assumptions allow us to create a model that unifies two previously separate strands of modeling work on language usage and acquisition and account for a variety of new
phenomena. In particular, we consider new explanations of disambiguation in early word learning
and the acquisition of quantifiers, and demonstrate that our model is capable of developing novel and
efficient communicative systems through iterated learning within the context of a single simulated
conversation.
Our assumptions produce a tractable model, but because they deviate from pure rationality, they
must introduce biases, of which we identify two: a tendency for pragmatic speakers and listeners to
accentuate useful, sparse patterns in their communicative systems (§5.1), and for short, "low cost"
expressions to be assigned to common objects (§5.2). Strikingly, both of these biases systematically
drive the overall communicative system towards greater global efficiency. In the long term, these
processes should leave their mark on the structure of the language itself, which may contribute to
explaining how languages become optimized for effective communication [26, 24].
More generally, understanding the interaction between pragmatics and learning is a precondition to
developing a unified understanding of human language. Our work here takes a first step towards
joining disparate strands of research that have treated language acquisition and language use as
distinct.
Acknowledgments
This work was supported in part by the European Commission through the EU Cognitive Systems Project Xperience (FP7-ICT-270273), the John S. McDonnell Foundation, and ONR grant
N000141310287.
References
[1] D.A. Baldwin. Early referential understanding: Infants' ability to recognize referential acts for what they are. Developmental Psychology, 29(5):832–843, 1993.
[2] D. Barner, N. Brooks, and A. Bale. Accessing the unsaid: The role of scalar alternatives in children's pragmatic inference. Cognition, 118(1):84, 2011.
[3] L. Bergen, N. D. Goodman, and R. Levy. That's what she (could have) said: How alternative utterances affect language use. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012.
[4] R.A.H. Bion, A. Borovsky, and A. Fernald. Fast mapping, slow learning: Disambiguation of novel word–object mappings in relation to vocabulary learning at 18, 24, and 30 months. Cognition, 2012.
[5] C. F. Camerer, T.-H. Ho, and J.-K. Chong. A cognitive hierarchy model of games. The Quarterly Journal of Economics, 119(3):861–898, 2004.
[6] E.V. Clark. On the logic of contrast. Journal of Child Language, 15:317–335, 1988.
[7] Herbert H. Clark and Deanna Wilkes-Gibbs. Referring as a collaborative process. Cognition, 22(1):1–39, 1986.
[8] R. Dale and E. Reiter. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233–263, 1995.
[9] M. C. Frank and N. D. Goodman. Predicting pragmatic reasoning in language games. Science, 336(6084):998–998, 2012.
[10] M. C. Frank, N. D. Goodman, and J. B. Tenenbaum. Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20:578–585, 2009.
[11] B. Galantucci. An experimental study of the emergence of human communication systems. Cognitive Science, 29(5):737–767, 2005.
[12] D. Golland, P. Liang, and D. Klein. A game-theoretic approach to generating spatial descriptions. In Proceedings of EMNLP 2010, pages 410–419. Association for Computational Linguistics, 2010.
[13] Noah D. Goodman and Andreas Stuhlmüller. Knowledge and implicature: Modeling language understanding as social cognition. Topics in Cognitive Science, 5:173–184, 2013.
[14] H.P. Grice. Logic and conversation. Syntax and Semantics, 3:41–58, 1975.
[15] L. Horn. Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. In Meaning, Form, and Use in Context, volume 42. Washington: Georgetown University Press, 1984.
[16] J. S. Horst and L. K. Samuelson. Fast mapping but poor retention by 24-month-old infants. Infancy, 13(2):128–157, 2008.
[17] G. Kachergis, C. Yu, and R. M. Shiffrin. An associative model of adaptive inference for learning word–referent mappings. Psychonomic Bulletin & Review, 19(2):317–324, April 2012.
[18] S. Kirby, H. Cornish, and K. Smith. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31):10681–10686, 2008.
[19] R. M. Krauss and S. Weinheimer. Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1964.
[20] T. Kwiatkowski, S. Goldwater, L. Zettlemoyer, and M. Steedman. A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 234–244, 2012.
[21] S.C. Levinson. Presumptive Meanings: The Theory of Generalized Conversational Implicature. MIT Press, 2000.
[22] E. M. Markman and G. F. Wachtel. Children's use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology, 20:121–157, 1988.
[23] A. Papafragou and J. Musolino. Scalar implicatures: Experiments at the semantics–pragmatics interface. Cognition, 86(3):253–282, 2003.
[24] S. T. Piantadosi, H. Tily, and E. Gibson. Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences, 108(9):3526–3529, 2011.
[25] R. van Rooy. Evolution of conventional meaning and conversational principles. Synthese, 139(2):331–366, 2004.
[26] G. Zipf. The Psychobiology of Language. Routledge, London, 1936.
4,342 | 493 | Some Approximation Properties of Projection
Pursuit Learning Networks
Ying Zhao Christopher G. Atkeson
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
This paper will address an important question in machine learning: What
kind of network architectures work better on what kind of problems? A
projection pursuit learning network has a very similar structure to a one
hidden layer sigmoidal neural network. A general method based on a
continuous version of projection pursuit regression is developed to show
that projection pursuit regression works better on angular smooth functions than on Laplacian smooth functions. There exists a ridge function
approximation scheme to avoid the curse of dimensionality for approximating functions in L²(φ_d).
1 INTRODUCTION
Projection pursuit is a nonparametric statistical technique to find "interesting"
low dimensional projections of high dimensional data sets. It has been used for
nonparametric fitting and other data-analytic purposes (Friedman and Stuetzle,
1981, Huber, 1985). Approximation properties have been studied by Diaconis &
Shahshahani (1984) and Donoho & Johnstone (1989). It was first introduced into
the context of learning networks by Barron & Barron (1988). A one hidden layer
sigmoidal feedforward neural network approximates f(x) using the structure (Figure 1(a)):

g(x) = Σ_{j=1}^{n} α_j σ(β_j θ_jᵀ x + γ_j)   (1)
Figure 1: (a) A One Hidden Layer Feedforward Neural Network. (b) A Projection Pursuit Learning Network.
where σ is a sigmoidal function, θ_j are direction parameters with ‖θ_j‖ = 1, and α_j, β_j, γ_j are function parameters. A projection pursuit learning network based
on projection pursuit regression (PPR) (Friedman and Stuetzle, 1981) or a ridge
function approximation scheme (RA) has a very similar structure (Figure 1(b)):

g(x) = Σ_{j=1}^{n} g_j(θ_jᵀ x)   (2)

where θ_j are also direction parameters with ‖θ_j‖ = 1. The corresponding function
parameters are the ridge functions g_j, which can be any smooth functions to be learned
from the data. Since σ is replaced by a more general smooth function g_j, projection
pursuit learning networks can be viewed as a generalization of one hidden layer
sigmoidal feedforward neural networks. This paper will discuss some approximation
properties of PPR:
1. Projection pursuit learning networks work better on angular smooth functions
than on Laplacian smooth functions. Here "work better" means that for fixed
complexities of hidden unit functions and a certain accuracy, fewer hidden units
are required. For the two dimensional case (d = 2), Donoho and Johnstone
(1989) show this result using equispaced directions. The equispaced directions
may not be available when d > 2. We use a set of directions generated from
zeros of orthogonal polynomials and uniformly distributed directions on a unit
sphere instead. The analysis method in D&J's paper is limited to two dimensions. We apply the theory of spherical harmonics (Muller, 1966) to develop a
continuous ridge function representation of any arbitrary smooth function and
then employ different numerical integration schemes to discretize it for cases
when d > 2.
2. The curse of dimensionality can be avoided when a proper ridge function approximation is applied. Once a continuous ridge function representation is established for any function in L²(φ_d), a Monte Carlo type of integration scheme
can be applied which has an RMS error convergence rate O(N^{−1/2}), where N is
the number of ridge functions in the linear combination. This is a similar
result to Barron's (Barron, 1991), except that we have fewer restrictions
on the underlying function class.
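As a concrete reading of structures (1) and (2), the sketch below (illustrative, made-up parameter values) evaluates both networks on the same input; the only difference is whether the hidden ridge functions are fixed sigmoids or arbitrary one-dimensional functions that PPR would fit to data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 3
x = rng.normal(size=d)

# Directions theta_j on the unit sphere (||theta_j|| = 1).
theta = rng.normal(size=(n, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

# One-hidden-layer sigmoidal net, eq. (1): fixed ridge shape sigma.
alpha, beta, gamma = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
g_nn = np.sum(alpha * sigma(beta * (theta @ x) + gamma))

# PPR / ridge approximation, eq. (2): each g_j is an arbitrary smooth
# 1-D function (stand-in choices here; in PPR these are learned).
g_j = [lambda t: t**3, np.cos, lambda t: t * np.abs(t)]
g_ppr = sum(g(theta[j] @ x) for j, g in enumerate(g_j))
print(g_nn, g_ppr)
```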
Figure 2: (a) A radial basis element J_{014}; (b) a harmonic basis element J_{810}.

2 SMOOTHNESS CLASSES AND AN L²(φ_d) BASIS
We use L²(φ_d) as our underlying function space, with Gaussian measure φ_d = (2π)^{−d/2} e^{−‖x‖²/2} on R^d and ‖f‖² = ∫ f² φ_d dx. The smoothness classes characterize the rates of
convergence. Let Δ_d be the Laplacian operator and Λ_d be the Laplace–Beltrami
operator (Muller, 1966). The smoothness classes can be defined as:

Definition 1  A function f ∈ L²(φ_d) will be said to have Cartesian smoothness
of order p if it has p derivatives and these derivatives are all in L²(φ_d). It will
be said to have angular smoothness of order q if Λ_d^q f ∈ L²(φ_d). It will be said
to have Laplacian smoothness of order r if Δ_d^r f ∈ L²(φ_d). Let F_p be the class of
functions with Cartesian smoothness p, let A_p^q be the class of functions with Cartesian
smoothness p and angular smoothness q, and let L_p^r be the class of functions with
Cartesian smoothness p and Laplacian smoothness r.
We derive an orthogonal basis in L²(φ_d) from the eigenfunctions of a self-adjoint
operator. The basis element is defined as:

J_{njm}(x) = γ_m r^n L_m^{α}(r²/2) S_{nj}(ξ)   (3)

where x = rξ, n = 0, ..., ∞, m = 0, ..., ∞, j = 1, ..., N(d, n), γ_m = (−2)^m m!, and
α = n + (d − 2)/2. The S_{nj}(ξ) are linearly independent spherical harmonics of degree n in d
dimensions (Muller, 1966), and L_m^{α} is a Laguerre polynomial. The advantage of
the basis comes from its representation as a product of a spherical harmonic and
a radial polynomial. Specifically, J_{0jm} is a radial polynomial for n = 0 and J_{nj0}
is a harmonic polynomial for m = 0. Figure 2(a),(b) show a radial basis element
and a harmonic basis element when n + 2m = 8. The basis elements J_{njm} have the
orthogonality property

E(J_{njm} J_{n'j'm'}) = 0 unless n = n', j = j', m = m'   (4)

where E denotes expectation with respect to φ_d. Since it is a basis in L²(φ_d), any
function f ∈ L²(φ_d) has an expansion in terms of the basis elements J_{njm}:

f = Σ_{n,j,m} c_{njm} J_{njm}   (5)
The circular harmonic e^{inθ} is a special case of the spherical harmonic S_{nj}(ξ). In two
dimensions, d = 2, N(d, n) = 2 and x = (r cos θ, r sin θ). The spherical harmonic
S_{nj}(ξ) can be reduced to the following: S_{n1}(ξ) = π^{−1/2} cos nθ, S_{n2}(ξ) = π^{−1/2} sin nθ,
which is the circular harmonic.
Smoothness classes can also be defined qualitatively from expansions of functions
in terms of basis elements J_{njm}. Since ‖f‖² = Σ c²_{njm} ‖J_{njm}‖², one can think of

p_{njm}(f) = c²_{njm} ‖J_{njm}‖² / ‖f‖²

as representing the distribution of energy in f among the
different modes of oscillation J_{njm}. If f is Cartesian smooth, p_{njm}(f) peaks around
small n + 2m. If f is angular smooth, p_{njm}(f) peaks around small n. If f is
Laplacian smooth, p_{njm}(f) peaks around small m. To explain why PPR works
better on angular smooth functions than on Laplacian smooth functions, we first
examine how to represent these L²(φ_d) basis elements systematically in terms of
ridge functions and then use the expansion (5) to derive an error bound of RA for
general smooth functions.
3 CONTINUOUS RIDGE FUNCTION SCHEMES
There exists a continuous ridge function representation for any function f(x) ∈
L²(φ_d), which is an integral of ridge functions over all possible directions:

f(x) = ∫_{Ω_d} g(xᵀη, η) dω_d(η)   (6)

This works intuitively because any object is determined by any infinite set of radiographs. More precisely, any function f(x) ∈ L²(φ_d) can be approximated arbitrarily
well by a linear combination of ridge functions Σ_k g(xᵀη_k, η_k), provided infinitely
many combination units (Jones, 1987). As k → ∞, we have (6). The natural
discrete approximation to (6) has the form f_n(x) = Σ_{j=1}^{n} w_j g(xᵀη_j, η_j), which
becomes the usual PPR (2). We proved a continuous ridge function representation
of the basis elements J_{njm}, which is shown in Lemma 1.
Lemma 1  The continuous ridge function representation of J_{njm} is:

J_{njm}(x) = λ_{nmd} ∫_{Ω_d} H_{n+2m}(ηᵀx) S_{nj}(η) dω_d(η)   (7)

where λ_{nmd} is a constant and H_{n+2m}(t) is a Hermite polynomial.
Therefore any function f ∈ L²(φ_d) has a continuous ridge function representation
(6) with

g(xᵀη, η) = Σ_{n,j,m} c_{njm} λ_{nmd} H_{n+2m}(xᵀη) S_{nj}(η)   (8)

Gaussian quadrature and Monte Carlo integration schemes can be used to discretize (6).
4 GAUSSIAN QUADRATURE
Since ∫_{Ω_d} g(xᵀη, η) dω_d(η) = ∫_{Ω_{d−1}} ∫_{−1}^{1} g(xᵀη, η)(1 − t²_{d−1})^{(d−3)/2} dt_{d−1} dω_{d−1}(η_{d−1}),
simple product rules using Gaussian quadrature formulae can be used here. Let t_{ij}, i =
d−1, ..., 1, j = 1, ..., n be zeros of orthogonal polynomials with weights (1 − t²)^{(i−2)/2}.
Then N = n^{d−1} points on the unit sphere Ω_d can be formed using the t_{ij} through

η = ( t_{d−1}, √(1 − t²_{d−1}) t_{d−2}, ..., √(1 − t²_{d−1}) ··· √(1 − t²_2) t_1, √(1 − t²_{d−1}) ··· √(1 − t²_1) )ᵀ   (9)

[Figure 3: Directions (a) for a radial polynomial; (b) for a harmonic polynomial.]

If g(xᵀη, η) is a polynomial of degree at most 2n − 1 (in terms of t_1, ..., t_{d−1}), then
N = n^{d−1} points (directions) are sufficient to represent the integral exactly. This
can be demonstrated with two examples by taking d = 3.
Example 1: a radial function.

x⁴ + y⁴ + z⁴ + 2x²y² + 2x²z² + 2y²z² = c₁ ∫_{Ω₃} (xᵀη)⁴ dω₃(η)   (10)

Here d = 3 and n = 3, so n² = 9 directions from (9) are sufficient to represent this polynomial,
with t₂ = 0, √(3/5), −√(3/5) (zeros of a degree 3 Legendre polynomial) and
t₁ = 0, √3/2, −√3/2 (zeros of a degree 3 Tschebyscheff polynomial). More directions
are needed to represent a harmonic function with exactly the same number of terms
of monomials but with different coefficients.
Example 2: a harmonic function.

(1/8)(8x⁴ + 3y⁴ + 3z⁴ − 24x²y² − 24x²z² + 6y²z²) = c₂ ∫_{Ω₃} (xᵀη)⁴ S₄₁(η) dω₃(η)   (11)

where S₄₁(η) = (1/8)(35t⁴ − 30t² + 3) and η = t ε₃ + √(1 − t²) η̄. Here n = 5, so n² =
25 directions from (9) are sufficient to represent the polynomial, with t₂ =
0, 0.90618, −0.90618, 0.53847, −0.53847 and t₁ = cos((2j − 1)π/10), j = 1, ..., 5. The distribution of these directions on a unit sphere is shown in Figure 3(a) and (b).
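Identity (10) can be checked numerically: the left-hand side equals (x² + y² + z²)², so the ridge integral of (xᵀη)⁴ must depend on x only through ‖x‖. The sketch below (a Monte Carlo check, not the quadrature rule itself) estimates the integral with uniform directions and verifies that its ratio to ‖x‖⁴ is the same constant for every test point:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = rng.normal(size=(200_000, 3))
eta /= np.linalg.norm(eta, axis=1, keepdims=True)   # uniform on the sphere

def ridge_integral(x):
    # (4*pi / N) * sum (x^T eta)^4 approximates the integral in (10).
    return 4 * np.pi * np.mean((eta @ x) ** 4)

for x in [np.array([1.0, 0, 0]), np.array([1.0, 1.0, 0]), np.array([1, 2, 2.0])]:
    print(np.linalg.norm(x), ridge_integral(x) / np.linalg.norm(x) ** 4)
# All three ratios agree (about 4*pi/5), confirming the integral is radial.
```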
In general, N = (n + m + 1)^{d−1} directions are sufficient to represent J_{njm} exactly
by using zeros of orthogonal polynomials. If p = n + 2m (the degree of the basis)
is fixed, N = (p − m + 1)^{d−1} is minimized at (p/2 + 1)^{d−1} when n = 0, which
corresponds to the radial basis element. N is maximized when m = 0, which is the
harmonic element. Using the definitions of smoothness classes in Section 2, we can
show that ridge function approximation works better on angular smooth functions.
The basic result is as follows:

Theorem 1  Let f ∈ A_p^q, and let R_N f denote a sum of ridge functions which best approximates f using a set of directions generated by zeros of orthogonal polynomials.
Then the error ‖f − R_N f‖ obeys a bound (12) that improves with the angular smoothness q.
This error bound says that ridge function approximation does take advantage of
angular smoothness. Radial functions are the most angular smooth functions with
q = +00 and harmonic functions are the least angular smooth functions when the
Cartesian smoothness p is fixed. Therefore ridge function approximation works
better on angular smooth functions than on Laplacian smooth functions. Radial
and harmonic functions are the two extreme cases.
5
UNIFORMLY DISTRIBUTED DIRECTIONS ON
nd
Instead of using directions from zeros of orthogonal polynomials, N uniformly distributed directions on nd is an alternative to generalizing equispaced directions.
This is a Monte Carlo type of integration scheme on Oct.
To approximate the integral (7), N uniformly distributed directions 1/1, 112, ...... , TJN
on nd drawn from the density I(TJ) = l/wct on Od are used:
N
Jnjm(x) = ';; Anmd
L Hn+2m (XTTJIe)Snj(TJIe)
(13)
Ie=l
(14)
The variance is
u 2 (x)
u1(x) = - -
(15)
N
u2 (x) = In. [AnmdWdHn+2m(XTTJ)Snj(TJ) -
Jnjm(X)] 2 ..,l.dwd(TJ). Therefore
Jnjm(X) approximates Jnjm(x) with a rate u(x)/VN. The difference between a
radial basis and a harmonic basis is u(x). Let us average u(x) with respect to ?d:
where
Ilu(x)112 =
=
IIJnjmll 2 [ri~~;~:d 1)
- 1] = IIJnjmll 2anjm
For a fixed n + 2m
p, Ilu(x)112 is minimized at n
maximized at m 0 (a harmonic element).
=
=0
The same justification can be done for a general function
(8). A RA scheme is:
(16)
(a radial element) and
1 E L2( ?d)
with (6) and
941
942
Zhao and Atkeson
mN(x) = I(x), O';;(x) = (T2kX) , 0'2(x) = fOil wdg 2(xT TJ, TJ)dwct(TJ) - 12(x) and
110'(x)1I2 = L c!jmIIJnjmWanjm' Since anjm is small when n is small, large when m
is small and recall the distribution Pnjm(J) in Section 3, 110'11 2/11/112 is small when
I is smooth. Among these smooth functions, if I is angular smooth, 110'11 2/11/112
is smaller than that if I is Laplacian smooth. The RMS error convergence rate
IIli",1I = ,,)I~J:v is consequently smaller for I being angular smooth than for I being
Laplacian smooth. But both rates are D( N-i) no matter what class the underlying
fundion belongs to. The difference is the constant which is related to the distribution of energy in I among the different modes of oscillations (angular or Laplacian).
The radial and harmonic functions are two extremes.
6
THE CURSE OF DIMENSIONALITY IN RA
Generally, if N directions TJ1, 1/2, ...... , TJN, on Od drawn from any distribution p(TJ)
on the sphere Od to approximate (6)
I(x) = ~
t
9(XTTJ1c ,TJ1c?
N 1c=1
p( TJ1c)
mN =
lex)
IT;; =
C;;
where
(17)
0'2(x) = fOil p(~)g2(xTTJ,TJ)dwd(TJ) - 12(x).
111T(x)112 = I _(1) IIgl1I1 2dwd(TJ) -11/11 2 =
lOll P TJ
c/
And
(18)
Then
(19)
That is, I(x) --+ lex) with a rate D(N-~). Equation (19) shows that there is no
curse of dimensionality if a ridge function approximation scheme (17) is used for
lex). The same conclusion can be drawn when sigmoidal hidden unit function neural networks are applied to Barron's class of underlying function (Barron, 1991).
But our function class here is the function class that can be represented by a continuous ridge function (6), which is a much larger function class than Barron's. Any
function I E L 2 (?d) has a representation (6)(Section 4). Therefore, for any function I E L2( ?d), there exists a node function g(xT TJ, TJ) and related ridge function
approximation scheme (17) to approximate lex) with a rate D(N-i), which has no
curse of dimensionality. In other words, if we are allowed to choose a node function
g(xT TJ, TJ) according to the property of data, which is the characteristic ofPPR, then
ridge function approximation scheme can avoid the curse of dimensionality. That
is a generalization of Barron's result that the curse of dimensionality goes away if
certain types of node function (e.g., CO$ and 0') are considered.
The smoothness of a underlying function determines the size of the constant Ct. As
shown in the previous section, ifp(TJ) = l/wd (i.e., uniformly distributed directions),
then angular smooth functions have smaller c/ than Laplacian smooth functions do.
Choosing different p( TJ) does not change this conclusion. But a properly chosen p( TJ)
reduces
in general. If I(x) is smooth enough, the node function g(xT TJ, TJ) can be
c,
Some Approximation Properties of Projection Pursuit Learning Networks
computed from the Radon transform R"f of f in the direction
as
1]
which is defined
(20)
=
and we proved: g(xT1],1])
F-1(Fl1(t)ltld-1) It=X'l'l1, where Fl1(t) is the Fourier
transform of R"f(') and F- 1 denotes the inverse Fourier transform. In practice,
learning g(xT 11,11) is usually replaced by a smoothing step which seeks a one dimensional function to fit x T 1] best to the residual in this direction (Friedman and
Stuetzle, 1981, Zhao and Atkeson, 1991).
7
CONCLUSION
As we showed, PPR works better on angular smooth function than on Laplacian
smooth functions by discretizing a continuous ridge function representation. PPR
can avoid the curse of dimensionality by learning node functions from data.
Acknowledgments
Support was provided under Air Force Office of Scientific Research grant AFOSR89-0500. Support for CGA was provided by a National Science Foundation Presidential Young Investigator Award, an Alfred P. Sloan Research Fellowship, and the
W. M. Keck Foundation Associate Professorship in Biomedical Engineering. Special thanks goes to Prof. Zhengfang Zhou and Prof. Peter Huber at Math Dept. in
MIT, who provided useful discussions.
References
Barron, A. R. and Barron, R. L. (1988) "Statistical Learning Networks: A
Unifying View." Computing Science and Stati6tic6: Proceeding6 of 20th Symp06ium
on the Interface. Ed Wegman, editor, Amer. Statist. Assoc., Washington, D. C.,
192-203.
Barron, A. R. (1991) "Universal Approximation Bounds for Superpositions of
A Sigmoidal Function". TR. 58. Dept. of Stat., Univ. of nlinois at UrbanaChampaign.
Donoho, D. L. and Johnstone, I. (1989). "Projection-based Approximation,
and Duality with Kernel Methods". Ann. Statid., 17, 58-106.
Diaconis, P. and Shahshahani, M. (1984) "On Non-linear Functions of Linear
Combinations", SIAM J. Sci. Stat. Compt. 5, 175-191.
Friedman, J. H. and Stuetzle, W. (1981) "Projection Pursuit Regression". J.
Amer. Stat. Auoc., 76, 817-823.
Huber, P. J. (1985) "Projection Pursuit (with discussion)", Ann. Stati6t., 19,
495-4 75.
Jones, L. (1987) "On A Conjecture of Huber Concerning the Convergence of Projection Pursuit Regression". Ann. Statid., 15, 880-882.
Muller, C. (1966), Spherical Harmonic,. Lecture Notes in Mathematics, no.17.
Zhao, Y. and C. G. Atkeson (1991) "Projection Pursuit Learning", Proc.
IJCNN-91-SEATTLE.
943
| 493 |@word version:1 polynomial:16 nd:3 hu:1 seek:1 cla:1 tr:1 n8:2 wd:1 od:3 dx:1 moo:5 numerical:1 analytic:1 intelligence:1 fewer:1 math:1 node:5 sigmoidal:6 belt:1 hermite:1 c2:1 loll:1 fitting:1 huber:4 ra:4 examine:1 jlt:1 spherical:6 curse:8 jm:3 becomes:1 provided:4 underlying:5 proceeding6:1 what:3 kind:2 developed:1 ti:1 njm:2 exactly:3 um:1 assoc:1 unit:7 grant:1 engineering:1 studied:1 co:2 limited:1 professorship:1 acknowledgment:1 practice:1 tpd:1 stuetzle:4 universal:1 projection:20 word:1 radial:12 operator:3 context:1 restriction:1 demonstrated:1 go:2 rule:1 dw:2 justification:1 compt:1 arbitray:1 equispaced:3 associate:1 element:14 approximated:1 ft:2 complexity:1 basis:16 represented:1 univ:1 monte:3 artificial:1 choosing:1 imj:1 larger:1 say:1 presidential:1 think:1 transform:3 advantage:2 product:2 p4:1 tj1:1 adjoint:1 seattle:1 convergence:4 keck:1 object:1 derive:2 develop:1 urbanachampaign:1 stat:3 ij:1 come:1 direction:23 beltrami:1 generalization:2 im:1 mm:1 around:3 considered:1 mo:1 tjk:2 jx:1 purpose:1 proc:1 harmony:1 superposition:1 mit:1 gaussian:4 avoid:3 zhou:1 office:1 properly:1 hidden:7 among:3 smoothing:1 integration:4 special:2 once:1 washington:1 jones:2 minimized:1 t2:1 employ:1 diaconis:2 national:1 replaced:2 friedman:4 circular:2 extreme:2 tj:32 integral:3 orthogonal:6 monomials:1 thanks:1 density:1 peak:3 siam:1 ie:1 eas:1 jo:1 tjn:2 hn:3 choose:1 ilu:2 zhao:6 tii:1 coefficient:1 matter:1 sloan:1 radiograph:1 tli:2 view:1 air:1 formed:1 accuracy:1 il:1 variance:1 characteristic:1 who:1 maximized:1 carlo:3 explain:1 ed:1 definition:2 energy:2 proved:2 massachusetts:1 recall:1 dimensionality:8 follow:1 amer:2 done:1 angular:16 biomedical:1 christopher:1 mode:2 scientific:1 hal:1 ye:1 laboratory:1 shahshahani:2 i2:1 sin:1 self:1 ridge:24 l1:1 interface:1 harmonic:15 ifp:1 jl:1 approximates:2 cambridge:1 smoothness:7 mathematics:1 i6:3 gj:1 showed:1 belongs:1 certain:2 discretizing:1 arbitrarily:1 a41:1 muller:4 ii:1 dwd:6 reduces:1 smooth:29 ing:1 fl1:2 sphere:2 concerning:1 award:1 laplacian:14 regression:5 basic:1 expectation:1 represent:6 kernel:1 fellowship:1 wct:1 snj:6 eigenfunctions:1 feedforward:3 enough:1 fit:1 architecture:1 rms:2 peter:1 tij:1 generally:1 cga:1 useful:1 characterise:1 sn2:1 nonparametric:2 s4:1 statist:1 reduced:1 alfred:1 discrete:1 basil:1 drawn:3 pj:1 inverse:1 vn:1 oscillation:2 radon:1 layer:4 bound:3 ct:1 ijcnn:1 orthogonality:1 precisely:1 ri:1 u1:1 fourier:2 conjecture:1 llj:2 according:1 phere:2 combination:4 legendre:1 jr:1 ate:5 smaller:3 nomial:1 intuitively:1 equation:1 discus:1 needed:1 pursuit:18 available:1 junction:1 apply:1 barron:11 away:1 tjd:1 cia:2 fundion:1 alternative:1 jane:5 hat:2 jn:2 denotes:2 a4:2 unifying:1 prof:2 approximating:1 question:1 lex:4 usual:1 sci:1 ying:1 lg:1 proper:1 discretize:2 wegman:1 y1:1 rn:1 introduced:1 required:1 learned:1 established:1 address:1 usually:1 laguerre:1 oj:1 natural:1 force:1 residual:1 mn:2 representing:1 scheme:10 technology:1 ppr:6 l2:8 lecture:1 interesting:1 tjj:1 foundation:2 degree:5 sufficient:3 editor:1 systematically:1 foil:2 institute:1 johnstone:3 taking:1 distributed:5 dimension:3 qualitatively:1 avoided:1 atkeson:6 approximate:3 xt1:1 continuous:9 why:1 expansion:3 cl:1 poly:1 linearly:1 jae:1 allowed:1 quadrature:3 tl:4 aid:3 col:1 young:1 formula:1 theorem:1 xt:11 exists:3 cartesian:2 t4:2 generalizing:1 jmj:1 infinitely:1 g2:1 u2:1 corresponds:1 determines:1 ma:1 oct:1 viewed:1 donoho:3 consequently:1 ann:3 monic:2 change:1 infinite:1 
determined:1 uniformly:5 except:1 specifically:1 lemma:1 duality:1 support:2 investigator:1 dept:2 ex:2 |
4,343 | 4,930 | Model Selection for High-Dimensional Regression
under the Generalized Irrepresentability Condition
Andrea Montanari
Stanford University
Stanford, CA 94305
[email protected]
Adel Javanmard
Stanford University
Stanford, CA 94305
[email protected]
Abstract
In the high-dimensional regression model a response variable is linearly related to
p covariates, but the sample size n is smaller than p. We assume that only a small
subset of covariates is ?active? (i.e., the corresponding coefficients are non-zero),
and consider the model-selection problem of identifying the active covariates.
A popular approach is to estimate the regression coefficients through the Lasso
(`1 -regularized least squares). This is known to correctly identify the active set
only if the irrelevant covariates are roughly orthogonal to the relevant ones, as
quantified through the so called ?irrepresentability? condition. In this paper we
study the ?Gauss-Lasso? selector, a simple two-stage method that first solves the
Lasso, and then performs ordinary least squares restricted to the Lasso active set.
We formulate ?generalized irrepresentability condition? (GIC), an assumption that
is substantially weaker than irrepresentability. We prove that, under GIC, the
Gauss-Lasso correctly recovers the active set.
1
Introduction
In linear regression, we wish to estimate an unknown but fixed vector of parameters ?0 ? Rp from n
pairs (Y1 , X1 ), (Y2 , X2 ), . . . , (Yn , Xn ), with vectors Xi taking values in Rp and response variables
Yi given by
Wi ? N(0, ? 2 ) ,
Yi = h?0 , Xi i + Wi ,
(1)
where h ? , ? i is the standard scalar product.
In matrix form, letting Y = (Y1 , . . . , Yn )T and denoting by X the design matrix with rows
X1T , . . . , XnT , we have
Y = X ?0 + W ,
W ? N(0, ? 2 In?n ) .
(2)
In this paper, we consider the high-dimensional setting in which the number of parameters exceeds
the sample size, i.e., p > n, but the number of non-zero entries of ?0 is smaller than p. We denote by
S ? supp(?0 ) ? [p] the support of ?0 , and let s0 ? |S|. We are interested in the ?model selection?
problem, namely in the problem of identifying S from data Y , X.
In words, there exists a ?true? low dimensional linear model that explains the data. We want to
identify the set S of covariates that are ?active? within this model. This problem has motivated a
large body of research, because of its relevance to several modern data analysis tasks, ranging from
signal processing [9, 5] to genomics [15, 16]. A crucial step forward has been the development of
model-selection techniques based on convex optimization formulations [17, 8, 6]. These formulations have lead to computationally efficient algorithms that can be applied to large scale problems.
Such developments pose the following theoretical question: For which vectors ?0 , designs X, and
1
noise levels ?, the support S can be identified, with high probability, through computationally efficient procedures? The same question can be asked for random designs X and, in this case, ?high
probability? will refer both to the noise realization W , and to the design realization X. In the rest of
this introduction we shall focus ?for the sake of simplicity? on the deterministic settings, and refer
to Section 3 for a treatment of Gaussian random designs.
The analysis of computationally efficient methods has largely focused on `1 -regularized least
squares, a.k.a. the Lasso [17]. The Lasso estimator is defined by
n 1
o
?bn (Y, X; ?) ? arg minp
kY ? X?k22 + ?k?k1 .
(3)
??R
2n
In case the right hand side has more than one minimizer, one of them can be selected arbitrarily for
our purposes. It is worth noting that when columns of X are in general positions (e.g. when the
entries of X are drawn form a continuous probability distribution), the Lasso solution is unique [18].
We will often omit the arguments Y , X, as they are clear from the context. (A closely related method
is the so-called Dantzig selector [6]: it would be interesting to explore whether our results can be
generalized to that approach.)
It was understood early on that, even in the large-sample, low-dimensional limit n ? ? at p
constant, supp(?bn ) 6= S unless the columns of X with index in S are roughly orthogonal to the
ones with index outside S [12]. This assumption is formalized by the so-called ?irrepresentability
b = (XT X/n). Letting
condition?, that can be stated in terms of the empirical covariance matrix ?
b
b
?A,B be the submatrix (?i,j )i?A,j?B , irrepresentability requires
b S c ,S ?
b ?1 sign(?0,S )k? ? 1 ? ? ,
k?
S,S
(4)
for some ? > 0 (here sign(u)i = +1, 0, ?1 if, respectively, ui > 0, = 0, < 0). In an early breakthrough, Zhao and Yu [23] proved that, if this condition holds with ? uniformly bounded away from
0, it guarantees correct model selection also in the high-dimensional regime p n. Meinshausen
and B?ulmann [14] independently established the same result for random Gaussian designs, with
applications to learning Gaussian graphical models. These papers applied to very sparse models, requiring in particular s0 = O(nc ), c < 1, and parameter vectors with large coefficients. Namely,
p scalb
ing the columns of X such that ?i,i ? 1, for i ? [p], they require ?min ? mini?S |?0,i | ? c s0 /n.
Wainwright [21] strengthened considerably these results by allowing for general scalings of s0 , p, n
and proving that much smaller non-zero coefficients can be detected. Namely,
he proved that for a
p
broad class of empirical covariances it is only necessary that ?min ? c? (log p)/n. This scaling
of the minimum non-zero entry is optimal up to constants. Also, for a specific classes of random
Gaussian designs (including X with i.i.d. standard Gaussian entries), the analysis of [21] provides
tight bounds on the minimum sample size for correct model selection. Namely, there exists c` , cu >
0 such that the Lasso fails with high probability if n < c` s0 log p and succeeds with high probability
if n ? cu s0 log p.
While, thanks to these recent works [23, 14, 21], we understand reasonably well model selection
via the Lasso, it is fundamentally unknown what model-selection performances can be achieved
with general computationally practical methods. Two aspects of of the above theory cannot
be
?
improved substantially: (i) The non-zero entries must satisfy the condition ?min ? c?/ n to be
detected with
? high probability. Even if n = p and the
? measurement directions Xi are orthogonal,
e.g., X = nIn?n , one would need |?0,i | ? c?/ n to distinguish the i-th entry from noise.
For instance, in [10], the authors prove a general upper bound on the minimax power of tests for
hypotheses H0,i = {?0,i = 0}. Specializing this bound to the case of standard Gaussian designs, the
analysis of [10] shows
? formally that no test can detect ?0,i 6= 0, with a fixed degree of confidence,
unless |?0,i | ? c?/ n. (ii) The sample size must satisfy n ? s0 . Indeed, if this is not the case,
for each ?0 with support of size |S| = s0 , there is a one parameter family {?0 (t) = ?0 + t v}t?R
with supp(?0 (t)) ? S, X?0 (t) = X?0 and, for specific values of t, the support of ?0 (t) is strictly
contained in S.
On the other hand, there is no fundamental reason to assume the irrepresentability condition (4).
This follows from the requirement that a specific method (the Lasso) succeeds, but is unclear why
it should be necessary in general. In this paper we prove that the Gauss-Lasso selector has nearly
optimal model selection properties under a condition that is strictly weaker than irrepresentability.
2
G AUSS -L ASSO SELECTOR: Model selector for high dimensional problems
Input: Measurement vector y, design model X, regularization parameter ?, support size s0 .
b
Output: Estimated support S.
n
b
1: Let T = supp(? ) be the support of Lasso estimator ?bn = ?bn (y, X, ?) given by
n 1
o
?bn (Y, X; ?) ? arg minp
kY ? X?k22 + ?k?k1 .
??R
2n
2: Construct the estimator ?bGL as follows:
?1 T
?bTGL = (XT
XT y ,
T XT )
?bTGL
c = 0.
GL
3: Find s0 -th largest entry (in modulus) of ?bTGL , denoted by ?b(s
, and let
0)
GL
Sb ? i ? [p] : |?biGL | ? |?b(s
| .
0)
We call this condition the generalized irrepresentability condition (GIC). The Gauss-Lasso procedure uses the Lasso to estimate a first model T ? {1, . . . , p}. It then constructs a new estimator by
ordinary least squares regression of the data Y onto the model T .
We prove that the estimated model is, with high probability, correct (i.e., Sb = S) under conditions
comparable to the ones assumed in [14, 23, 21], while replacing irrepresentability by the weaker
generalized irrepresentability condition. In the case of random Gaussian designs, our analysis further
assumes the restricted eigenvalue property in order to establish a nearly optimal scaling of the sample
size n with the sparsity parameter s0 .
In order to build some intuition about the difference between irrepresentability and generalized
irrepresentability, it is convenient to consider the Lasso cost function at ?zero noise?:
1
1
b ? ?0 )i + ?k?k1 .
kX(? ? ?0 )k22 + ?k?k1 = h(? ? ?0 ), ?(?
2n
2
Let ?bZN (?) be the minimizer of G( ? ; ?) and v ? lim??0+ sign(?bZN (?)). The limit is well defined
by Lemma 2.2 below. The KKT conditions for ?bZN imply, for T ? supp(v),
G(?; ?) ?
b T c ,T ?
b ?1 vT k? ? 1 .
k?
T,T
Since G( ? ; ?) has always at least one minimizer, this condition is always satisfied. Generalized
irrepresentability requires that the above inequality holds with some small slack ? > 0 bounded
away from zero, i.e.,
b T c ,T ?
b ?1 vT k? ? 1 ? ? .
k?
T,T
Notice that this assumption reduces to standard irrepresentability cf. Eq. (4) if, in addition, we ask
that v = sign(?0 ). In other words, earlier work [14, 23, 21] required generalized irrepresentability
plus sign-consistency in zero noise, and established sign consistency in non-zero noise. In this paper
the former condition is shown to be sufficient.
From a different point of view, GIC demands that irrepresentability holds for a superset of the true
support S. It was indeed argued in the literature that such a relaxation of irrepresentability allows to
cover a significantly broader set of cases (see for instance [3, Section 7.7.6]). However, it was never
clarified why such a superset irrepresentability condition should be significantly more general than
simple irrepresentability. Further, no precise prescription existed for the superset of the true support.
Our contributions can therefore be summarized as follows:
? By tying it to the KKT condition for the zero-noise problem, we justify the expectation that
generalized irrepresentability should hold for a broad class of design matrices.
? We thus provide a specific formulation of superset irrepresentability, prescribing both the
superset T and the sign vector vT , that is, by itself, significantly more general than simple
irrepresentability.
3
? We show that, under GIC, exact support recovery can be guaranteed using the Gauss-Lasso,
and formulate the appropriate ?minimum coefficient? conditions that guarantee this. As a
side remark, even when simple irrepresentability holds, our results strengthen somewhat
the estimates of [21] (see below for details).
The paper is organized as follows. In the rest of the introduction we illustrate the range of applicability of GIC through a simple example and we discuss further related work. We finally introduce the
basic notations to be used throughout the paper. Section 2 treats the case of deterministic designs X,
and develops our main results on the basis of the GIC. Section 3 extends our analysis to the case of
random designs. In this case GIC is required to hold for the population covariance, and the analysis
is more technical as it requires to control the randomness of the design matrix. We refer the reader
to the long version of the paper [11] for the proofs of our main results and the technical steps.
1.1
An example
In order to illustrate the range of new cases covered by our results, it is instructive to consider a
simple example. A detailed discussion of this calculation can be found in [11]. The example corresponds to a Gaussian random design, i.e., the rows X1T , . . . XnT are i.i.d. realizations of a p-variate
normal distribution with mean zero. We write Xi = (Xi,1 , Xi,2 , . . . , Xi,p )T for the components of
Xi . The response variable is linearly related to the first s0 covariates:
Yi = ?0,1 Xi,1 + ?0,2 Xi,2 + ? ? ? + ?0,s0 Xi,s0 + Wi ,
2
where Wi ? N(0, ? ) and we assume ?0,i > 0 for all i ? s0 . In particular S = {1, . . . , s0 }.
As for the design matrix, first p ? 1 covariates are orthogonal at the population level, i.e., Xi,j ?
N(0, 1) are independent for 1 ? j ? p ? 1 (and 1 ? i ? n). However the p-th covariate is correlated
to the s0 relevant ones:
? i,p .
Xi,p = a Xi,1 + a Xi,2 + ? ? ? + a Xi,s + b X
0
? i,p ? N(0, 1) is independent from {Xi,1 , . . . , Xi,p?1 } and represents the orthogonal comHere X
ponent of the p-th covariate. We choose the coefficients a, b ? 0 such that s0 a2 + b2 = 1, whence
2
E{Xi,p
} = 1 and hence the p-th covariate is normalized as the first (p ? 1) ones. In other words,
the rows of X are i.i.d. Gaussian Xi ? N(0, ?) with covariance given by
?
?1 if i = j,
?ij = a if i = p, j ? S or i ? S, j = p,
?
0 otherwise.
For a = 0, this is the standard i.i.d. design and irrepresentability
pholds. The Lasso correctly recovers
the support S from n ? c s0 log p samples, provided ?min ? c0 (log p)/n. It follows from [21] that
this remains true as long as a ? (1 ? ?)/s0 for some ? > 0 bounded away from 0. However, as soon
as a > 1/s0 , the Lasso includes the p-th covariate in the estimated model, with high probability.
On the other hand, Gauss-Lasso is successful for a significantly larger set of values of a. Namely, if
1??
1 1??
a ? 0,
?
, ?
,
s0
s0
s0
p
then it recovers S from n ? c s0 log p samples, provided ?min ? c0 (log p)/n. While the interval
((1 ? ?)/s0 , 1/s0 ] is not covered by this result, we expect this to be due to the proof technique rather
than to an intrinsic limitation of the Gauss-Lasso selector.
1.2
Further related work
The restricted isometry property [7, 6] (or the related restricted eigenvalue [2] or compatibility conditions [19]) have been used to establish guarantees on the estimation and model selection errors of
the Lasso or similar approaches. In particular, Bickel, Ritov and Tsybakov [2] show that, under such
conditions, with high probability,
s0 log p
k?b ? ?0 k22 ? C? 2
.
n
4
The same conditions can be used to prove model-selection guarantees. In particular, Zhou [24]
studies a multi-step thresholding procedure whose first steps coincide with the Gauss-Lasso. While
the main objective of this work is to prove high-dimensional `2 consistency with a sparse estimated
model, the author also proves partial model selection guarantees. pNamely, the method correctly
recovers a subset of large coefficients SL ? S, provided |?0,i | ? c? s0 (log p)/n,
? for i ? SL . This
means that the coefficients that are guaranteed to be detected must be a factor s0 larger than what
is required by our results.
An alternative approach to establishing model-selection guarantees assumes a suitable mutual
incoherence conditions. Lounici [13] proves correct model selection under the assumption
b ij | = O(1/s0 ). This assumption is however stronger than irrepresentability [19]. Cand?es
maxi6=j |?
and Plan [4] also assume mutual incoherence, albeit with a much weaker requirement, namely
b ij | = O(1/(log p)). Under this condition, they establish model selection guarantees
maxi6=j |?
p
for an ideal scaling of the non-zero coefficients ?min ? c? (log p)/n. However, this result only
holds with high probability for a ?random signal model? in which the non-zero coefficients ?0,i have
uniformly random signs.
The authors in [22] consider the variable selection problem, and under the same assumptions on
the non-zero coefficients as in the present paper, guarantee support recovery under a cone condition.
The latter condition however is stronger than the generalized irrepresentability condition. In particular, for the example in Section 1.1 it yields no improvement over the standard irrepresentability.
The work [20] studies the adaptive and the thresholded Lasso
p estimators and proves correct model
selection assuming the non-zero coefficients are of order s0 (log p)/n.
Finally, model selection consistency can be obtained without irrepresentability through other methods. For instance [25] develops the adaptive Lasso, using a data-dependent weighted `1 regularization, and [1] proposes the Bolasso, a resampling-based techniques. Unfortunately, both of these
approaches are only guaranteed to succeed in the low-dimensional regime of p fixed, and n ? ?.
1.3
Notations
We provide a brief summary of the notations used throughout the paper. For a matrix A and set of
indices I, J, we let AJ denote the submatrix containing just the columns in J and AI,J denote the
submatrix formed by the rows in I and columns in J. Likewise, for a vector v, vI is the restriction
?1
?1
of v to indices in I. Further, the notation A?1
.
I,I represents the inverse of AI,I , i.e., AI,I = (AI,I )
The maximum and the minimum singular values of A are respectively denoted by ?max (A) and
?min (A). We write kvkp for the standard `p norm of a vector v. Specifically, kvk0 denotes the
number of nonzero entries in v. Also, kAkp refers to the induced operator norm on a matrix A. We
use ei to refer to the i-th standard basis element, e.g., e1 = (1, 0, . . . , 0). For a vector v, supp(v)
represents the positions of nonzero entries of v. Throughout, we denote the rows of the design matrix
X by X1 , . . . , Xn ? Rp and denote its columns by x1 , . . . , xp ? Rn . Further, for a vector v, sign(v)
is the vector with entries sign(v)i = +1 if vi > 0, sign(v)i = ?1 if vi < 0, and sign(v)i = 0
otherwise.
2
Deterministic designs
An outline of this section is as follows: (1) We first consider the zero-noise problem W = 0,
and prove several useful properties of the Lasso estimator in this case. In particular, we show
that there exists a threshold for the regularization parameter below which the support of the Lasso
estimator remains the same and contains supp(?0 ). Moreover, the Lasso estimator support is not
much larger than supp(?0 ). (2) We then turn to the noisy problem, and introduce the generalized
irrepresentability condition (GIC) that is motivated by the properties of the Lasso in the zero-noise
case. We prove that under GIC (and other technical conditions), with high probability, the signed
support of the Lasso estimator is the same as that in the zero-noise problem. (3) We show that the
Gauss-Lasso selector correctly recovers the signed support of ?0 .
5
2.1
Zero-noise problem
b ? (XT X/n) denotes the empirical covariance of the rows of the design matrix. Given
Recall that ?
p?p
b ?R ,?
b 0, ?0 ? Rp and ? ? R+ , we define the zero-noise Lasso estimator as
?
o
n 1
b ? ?0 )i + ?k?k1 .
h(? ? ?0 ), ?(?
(5)
?bZN (?) ? arg minp
??R
2n
Note that ?bZN (?) is obtained by letting Y = X?0 in the definition of ?bn (Y, X; ?).
b
Following [2], we introduce a restricted eigenvalue constant for the empirical covariance matrix ?:
b
hu, ?ui
?
b(s, c0 ) ? min
minp
.
(6)
u?R
kuk22
J?[p]
|J|?s kuJ c k1 ?c0 kuJ k1
Our first result states that supp(?bZN (?)) is not much larger than the support of ?0 , for any ? > 0.
Lemma 2.1. Let ?bZN = ?bZN (?) be defined as per Eq. (5), with ? > 0. Then, if s0 = k?0 k0 ,
!
b 2
4k
?k
ZN
k?b k0 ? 1 +
s0 .
(7)
?
b(s0 , 1)
Lemma 2.2. Let ?bZN = ?bZN (?) be defined as per Eq. (5), with ? > 0. Then there exist ?0 =
b S, ?0 ) > 0, T0 ? [p], v0 ? {?1, 0, +1}p , such that the following happens. For all ? ? (0, ?0 ),
?0 (?,
sign(?bZN (?)) = v0 and supp(?bZN (?)) = supp(v0 ) = T0 . Further T0 ? S, v0,S = sign(?0,S ) and
b ?1 v0,T ]i |.
?0 = mini?S |?0,i /[?
0
T0 ,T0
Finally we have the following standard characterization of the solution of the zero-noise problem.
Lemma 2.3. Let ?bZN = ?bZN (?) be defined as per Eq. (5), with ? > 0. Let T ? S and v ?
{+1, 0, ?1}p be such that supp(v) = T . Then sign(?bZN ) = v if and only if
b
b ?1 vT
(8)
?T c ,T ?
? 1,
T,T
?
b ?1 vT .
vT = sign ?0,T ? ? ?
(9)
T,T
bZN = ?0,T ? ? ?
b ?1 vT .
Further, if the above holds, ?bZN is given by ?bTZN
c = 0 and ?T
T,T
Motivated by this result, we introduce the generalized irrepresentability condition (GIC) for deterministic designs.
b ?0 ), ?
b ? Rp?p , ?0 ? Rp
Generalized irrepresentability (deterministic designs). The pair (?,
satisfy the generalized irrepresentability condition with parameter ? > 0 if the following happens.
Let v0 , T0 be defined as per Lemma 2.2. Then
b c b ?1
(10)
?T0 ,T0 ?T0 ,T0 v0,T0
? 1 ? ? .
?
In other words we require the dual feasibility condition (8), which always holds, to hold with a
positive slack ?.
2.2
Noisy problem
Consider the noisy linear observation model as described in (2), and let rb ? (XT W/n). We begin
with a standard characterization of sign(?bn ), the signed support of the Lasso estimator (3).
Lemma 2.4. Let ?bn = ?bn (y, X; ?) be defined as per Eq. (3), and let z ? {+1, 0, ?1}p with
supp(z) = T . Further assume T ? S. Then the signed support of the Lasso estimator is given by
sign(?bn ) = z if and only if
b
b ?1 zT + 1 (b
b T c ,T ?
b ?1 rbT )
rT c ? ?
(11)
? 1,
?T c ,T ?
T,T
T,T
?
?
b ?1 (?zT ? rbT ) .
zT = sign ?0,T ? ?
(12)
T,T
6
b ?
Theorem 2.5. Consider the deterministic design model with empirical covariance matrix ?
T
p
b
(X X)/n, and assume ?i,i ? 1 for i ? [p]. Let T0 ? [p], v0 ? {+1, 0, ?1} be the set and
b T ,T ) ? Cmin > 0. (ii) The pair (?,
b ?0 )
vector defined in Lemma 2.2. Assume that (i) ?min (?
0
0
satisfies the generalized irrepresentability condition with parameter
?. Consider the Lasso estimator
p
?bn = ?bn (y, X; ?) defined as per Eq. (3), with ? = (?/?) 2c1 log p/n, for some constant c1 > 1,
and suppose that for some c2 > 0:
?1
b
for all i ? S,
(13)
|?0,i | ? c2 ? + ?[?
T0 ,T0 v0,T0 ]i
?1
b
[?
for all i ? T0 \ S.
(14)
T0 ,T0 v0,T0 ]i ? c2
?
We further assume, without loss of generality, ? ? c2 Cmin . Then the following holds true:
n
o
P sign(?bn (?)) = v0 ? 1 ? 4p1?c1 .
(15)
Note that even under standard irrepresentability, this result improves over [21, Theorem 1.(b)], in
b ?1 k? .
that the required lower bound for |?0,i |, i ? S, does not depend on k?
S,S
b T ,T to have minimum singular
Remark 2.6. Condition (i) in Theorem 2.5 requires the submatrix ?
0
0
b
value bounded away form zero. Assuming ?S,S to be non-singular is necessary for identifiability.
b T ,T to be bounded away from zero is not much more
Requiring the minimum singular value of ?
0
0
restrictive since T0 is comparable in size with S, as stated in Lemma 2.1.
We next show that the Gauss-Lasso selector correctly recovers the support of ?0 .
b ?
Theorem 2.7. Consider the deterministic design model with empirical covariance matrix ?
T
b
(X X)/n, and assume that ?i,i ? 1 for i ? [p]. Under the assumptions of Theorem 2.5,
2
2
P k?bGL ? ?0 k? ? ? ? 4p1?c1 + 2pe?nCmin ? /2? .
In particular, if Sb is the model selected by the Gauss-Lasso, then P(Sb = S) ? 1 ? 6 p1?c1 /4 .
3
Random Gaussian designs
In the previous section, we studied the case of deterministic design models which allowed for a
straightforward analysis. Here, we consider the random design model which needs a more involved
analysis. Within the random Gaussian design model, the rows Xi are distributed as Xi ? N(0, ?)
for some (unknown) covariance matrix ? 0. In order to study the performance of Gauss-Lasso
selector in this case, we first define the population-level estimator. Given ? ? Rp?p , ? 0,
?0 ? Rp and ? ? R+ , the population-level estimator ?b? (?) = ?b? (?; ?0 , ?) is defined as
o
n1
?b? (?) ? arg minp
h(? ? ?0 ), ?(? ? ?0 )i + ?k?k1 .
(16)
??R
2
In fact, the population-level estimator is obtained by assuming that the response vector Y is noiseless
and n = ?, hence replacing the empirical covariance (XT X/n) with the exact covariance ? in the
lasso optimization problem (3). Note that the population-level estimator ?b? is deterministic, albeit
X is a random design. We show that under some conditions on the covariance ? and vector ?0 ,
T ? supp(?bn ) = supp(?b? ), i.e., the population-level estimator and the Lasso estimator share
the same (signed) support. Further T ? S. Since ?b? (and hence T ) is deterministic, XT is a
Gaussian matrix with rows drawn independently from N(0, ?T,T ). This observation allows for a
simple analysis of the Gauss-Lasso selector ?bGL .
An outline of the section is as follows: (1) We begin with noting that the population-level estimator
?b? (?) has the similar properties to ?bZN (?) stated in Section 2.1. In particular, there exists a threshold
?0 , such that for all ? ? (0, ?0 ), supp(?b? (?)) remains the same and contains supp(?0 ). Moreover,
supp(?b? (?)) is not much larger than supp(?0 ). (2) We show that under GIC for covariance matrix
? (and other sufficient conditions), with high probability, the signed support of the Lasso estimator
is the same as the signed support of the population-level estimator. (3) Following the previous steps,
we show that the Gauss-Lasso selector correctly recovers the signed support of ?0 .
7
3.1
The n = ? problem
Comparing Eqs. (5) and (16), the estimators ?bZN (?) and ?b? (?) are defined in a very similar manner
b and the latter is defined with respect to ?). It is easy to see
(the former is defined with respect to ?
?
b
b with ?.
that ? satisfies the properties stated in Section 2.1 once we replace ?
3.2
The high-dimensional problem
b ? (XT X)/n and rb ? (XT W )/n.
We now consider the Lasso estimator (3). Recall the notations ?
p?p
p
b
Note that ? ? R , rb ? R are both random quantities in the case of random designs.
Theorem 3.1. Consider the Gaussian random design model with covariance matrix ? 0,
and assume that ?i,i ? 1 for i ? [p]. Let T0 ? [p], v0 ? {+1, 0, ?1}p be the determinisb with ?), and t0 ? |T0 |. Assume that (i)
tic set and vector defined in Lemma 2.2 (replacing ?
?min (?T0 ,T0 ) ? Cmin > 0. (ii) The pair (?, ?0 ) satisfies the generalized irrepresentability condition with parameter
?. Consider the Lasso estimator ?bn = ?bn (y, X; ?) defined as per Eq. (3), with
p
? = (4?/?) c1 log p/n, for some constant c1 > 1, and suppose that for some c2 > 0:
3
|?0,i | ? c2 ? + ?[??1
for all i ? S,
T0 ,T0 v0,T0 ]i
2
?1
[?
for all i ? T0 \ S.
T0 ,T0 v0,T0 ]i ? 2c2
?
We further assume, without loss of generality, ? ? c2 Cmin . If n ? max(M1 , M2 )t0 log p
M1 ? (74c1 )/(? 2 Cmin ), and M2 ? c1 (32/(c2 Cmin ))2 , then the following holds true:
n
o
t0
n
P sign(?bn (?)) = v0 ? 1 ? pe? 10 ? 6e? 2 ? 8p1?c1 .
(17)
(18)
with
(19)
Note that even under standard irrepresentability, this result improves over [21, Theorem 3.(ii)], in
?1/2
that the required lower bound for |?0,i |, i ? S, does not depend on k?S,S k? .
Remark 3.2. Condition (i) follows readily from the restricted eigenvalue constraint, i.e.,
?? (t0 , 0) > 0. This is a reasonable assumption since T0 is not much larger than S0 , as stated
b with ?). Namely, s0 ? t0 ? (1 + 4k?k2 /?(s0 , 1))s0 .
in Lemma 2.1 (replacing ?
Below, we show that the Gauss-Lasso selector correctly recovers the signed support of ?0 .
Theorem 3.3. Consider the random Gaussian design model with covariance matrix ? 0,
and assume that ?i,i ? 1 for i ? [p]. Under the assumptions of Theorem 3.1, and for
n ? max(M1 , M2 )t0 log p, we have
s0
2
2
n
P k?bGL ? ?0 k? ? ? ? pe? 10 + 6e? 2 + 8p1?c1 + 2pe?nCmin ? /2? .
Moreover, letting S? be the model returned by the Gauss-Lasso selector, we have
n
P(Sb = S) ? 1 ? p e? 10 ? 6 e?
s0
2
? 10 p1?c1 .
Remark 3.4. [Detection level] Let ?min ? mini?S |?0,i | be the minimum magnitude of the nonzero entries of vector ?0 . By Theorem 3.3, Gauss-Lasso selector correctly recovers supp(?0 ), with
s0
n
probability greater than 1 ? p e? 10 ? 6 e? 2 ? 10 p1?c1 , if n ? max(M1 , M2 )t0 log p, and
r
log p
?min ? C?
1 + k??1
(20)
T0 ,T0 k? ,
n
for some constant C. Note that Eq. (20) follows from Eqs. (17) and (18).
8
References
[1] F. R. Bach. Bolasso: model consistent lasso estimation through the bootstrap. In Proceedings of the 25th
international conference on Machine learning, pages 33?40. ACM, 2008.
[2] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Amer. J.
of Mathematics, 37:1705?1732, 2009.
[3] P. B?uhlmann and S. van de Geer. Statistics for high-dimensional data. Springer-Verlag, 2011.
[4] E. Cand`es and Y. Plan. Near-ideal model selection by `1 minimization. The Annals of Statistics,
37(5A):2145?2177, 2009.
[5] E. Candes, J. K. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from
highly incomplete frequency information. IEEE Trans. on Inform. Theory, 52:489 ? 509, 2006.
[6] E. Cand?es and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals
of Statistics, 35:2313?2351, 2007.
[7] E. J. Cand?es and T. Tao. Decoding by linear programming. IEEE Trans. on Inform. Theory, 51:4203?
4215, 2005.
[8] S. Chen and D. Donoho. Examples of basis pursuit. In Proceedings of Wavelet Applications in Signal and
Image Processing III, San Diego, CA, 1995.
[9] D. L. Donoho. Compressed sensing. IEEE Trans. on Inform. Theory, 52:489?509, April 2006.
[10] A. Javanmard and A. Montanari. Hypothesis testing in high-dimensional regression under the gaussian
random design model: Asymptotic theory. arXiv preprint arXiv:1301.4240, 2013.
[11] A. Javanmard and A. Montanari. Model selection for high-dimensional regression under the generalized
irrepresentability condition. arXiv:1305.0355, 2013.
[12] K. Knight and W. Fu. Asymptotics for lasso-type estimators. Annals of Statistics, pages 1356?1378,
2000.
[13] K. Lounici. Sup-norm convergence rate and sign concentration property of lasso and dantzig estimators.
Electronic Journal of statistics, 2:90?102, 2008.
[14] N. Meinshausen and P. B?uhlmann. High-dimensional graphs and variable selection with the lasso.
Ann. Statist., 34:1436?1462, 2006.
[15] J. Peng, J. Zhu, A. Bergamaschi, W. Han, D.-Y. Noh, J. R. Pollack, and P. Wang. Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast
cancer. The Annals of Applied Statistics, 4(1):53?77, 2010.
[16] S. K. Shevade and S. S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic
regression. Bioinformatics, 19(17):2246?2253, 2003.
[17] R. Tibshirani. Regression shrinkage and selection with the Lasso. J. Royal. Statist. Soc B, 58:267?288,
1996.
[18] R. J. Tibshirani. The lasso problem and uniqueness. Electronic Journal of Statistics, 7:1456?1490, 2013.
[19] S. van de Geer and P. B?uhlmann. On the conditions used to prove oracle results for the lasso. Electron. J.
Statist., 3:1360?1392, 2009.
[20] S. van de Geer, P. B?uhlmann, and S. Zhou. The adaptive and the thresholded Lasso for potentially
misspecified models (and a lower bound for the Lasso). Electron. J. Stat., 5:688?749, 2011.
[21] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using `1 -constrained
quadratic programming. IEEE Trans. on Inform. Theory, 55:2183?2202, 2009.
[22] F. Ye and C.-H. Zhang. Rate minimaxity of the lasso and dantzig selector for the lq loss in lr balls.
Journal of Machine Learning Research, 11:3519?3540, 2010.
[23] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research,
7:2541?2563, 2006.
[24] S. Zhou. Thresholded Lasso for high dimensional variable selection and statistical estimation.
arXiv:1002.1583v2, 2010.
[25] H. Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association,
101(476):1418?1429, 2006.
9
| 4930 |@word cu:2 version:1 stronger:2 norm:3 c0:4 hu:1 integrative:1 bn:17 covariance:15 contains:2 denoting:1 comparing:1 must:3 readily:1 resampling:1 selected:2 lr:1 characterization:2 provides:1 clarified:1 zhang:1 c2:9 prove:9 kuj:2 manner:1 introduce:4 peng:1 javanmard:3 indeed:2 andrea:1 cand:4 p1:7 roughly:2 multi:1 provided:3 begin:2 bounded:5 notation:5 moreover:3 what:2 tying:1 tic:1 substantially:2 guarantee:8 k2:1 control:1 omit:1 yn:2 positive:1 understood:1 treat:1 limit:2 establishing:1 incoherence:2 signed:9 plus:1 au:1 dantzig:5 quantified:1 meinshausen:2 studied:1 range:2 unique:1 practical:1 testing:1 bootstrap:1 procedure:3 asymptotics:1 empirical:7 significantly:4 convenient:1 word:4 confidence:1 refers:1 cannot:1 onto:1 selection:25 operator:1 romberg:1 context:1 restriction:1 deterministic:10 straightforward:1 independently:2 convex:1 focused:1 formulate:2 simplicity:1 identifying:3 formalized:1 recovery:3 m2:4 estimator:27 proving:1 population:9 annals:4 diego:1 suppose:2 strengthen:1 exact:3 programming:2 us:1 hypothesis:2 element:1 preprint:1 wang:1 knight:1 intuition:1 ui:2 covariates:7 asked:1 depend:2 tight:1 bergamaschi:1 basis:3 k0:2 detected:3 ponent:1 outside:1 h0:1 whose:1 stanford:6 larger:7 otherwise:2 compressed:1 statistic:7 itself:1 noisy:4 eigenvalue:4 reconstruction:1 product:1 relevant:2 realization:3 ky:2 x1t:2 convergence:1 requirement:2 nin:1 maxi6:2 illustrate:2 stat:1 pose:1 ij:3 eq:10 solves:1 soc:1 direction:1 closely:1 correct:5 cmin:6 explains:1 require:2 argued:1 strictly:2 hold:12 normal:1 electron:2 bickel:2 early:2 a2:1 purpose:1 uniqueness:1 estimation:4 uhlmann:4 largest:1 weighted:1 minimization:1 gaussian:15 always:3 rather:1 zhou:3 shrinkage:1 broader:1 focus:1 improvement:1 detect:1 whence:1 dependent:1 sb:5 prescribing:1 interested:1 tao:3 compatibility:1 arg:4 dual:1 noh:1 denoted:2 development:2 plan:2 proposes:1 breakthrough:1 constrained:1 mutual:2 construct:2 never:1 once:1 represents:3 broad:2 yu:2 nearly:2 fundamentally:1 develops:2 modern:1 keerthi:1 n1:1 detection:1 highly:1 fu:1 partial:1 necessary:3 orthogonal:5 unless:2 incomplete:1 theoretical:1 pollack:1 instance:3 column:6 earlier:1 cover:1 zn:1 ordinary:2 cost:1 applicability:1 subset:2 entry:11 predictor:1 successful:1 considerably:1 thanks:1 fundamental:1 international:1 decoding:1 satisfied:1 containing:1 choose:1 american:1 zhao:2 supp:20 de:3 summarized:1 b2:1 includes:1 coefficient:12 satisfy:3 kvkp:1 vi:3 view:1 sup:1 candes:1 identifiability:1 contribution:1 square:4 formed:1 largely:1 likewise:1 yield:1 identify:2 rbt:2 worth:1 randomness:1 simultaneous:1 inform:4 definition:1 frequency:1 involved:1 proof:2 recovers:9 proved:2 treatment:1 popular:1 ask:1 recall:2 lim:1 gic:12 improves:2 organized:1 response:4 improved:1 april:1 formulation:3 ritov:2 lounici:2 amer:1 generality:2 just:1 stage:1 shevade:1 hand:3 replacing:4 ei:1 logistic:1 aj:1 modulus:1 ye:1 k22:4 y2:1 true:6 requiring:2 former:2 regularization:3 hence:3 normalized:1 bzn:19 nonzero:3 bgl:4 adelj:1 generalized:17 outline:2 performs:1 ranging:1 image:1 misspecified:1 association:1 he:1 m1:4 refer:4 measurement:2 ai:4 consistency:5 mathematics:1 han:1 v0:15 multivariate:1 isometry:1 recent:1 irrelevant:1 irrepresentability:38 verlag:1 inequality:1 arbitrarily:1 vt:7 yi:3 minimum:7 greater:1 somewhat:1 signal:4 ii:4 reduces:1 ing:1 asso:1 exceeds:1 technical:3 calculation:1 bach:1 long:2 prescription:1 e1:1 specializing:1 feasibility:1 regression:10 basic:1 breast:1 noiseless:1 
expectation:1 arxiv:4 achieved:1 c1:13 addition:1 want:1 interval:1 singular:4 crucial:1 rest:2 induced:1 call:1 near:1 noting:2 ideal:2 iii:1 superset:5 easy:1 variate:1 lasso:62 identified:1 t0:41 whether:1 motivated:3 adel:1 returned:1 remark:4 useful:1 clear:1 covered:2 detailed:1 tsybakov:2 statist:3 sl:2 exist:1 notice:1 sign:22 estimated:4 correctly:9 per:7 rb:3 tibshirani:2 write:2 shall:1 bolasso:2 threshold:3 drawn:2 thresholded:3 montanar:1 graph:1 relaxation:1 cone:1 inverse:1 uncertainty:1 master:1 extends:1 family:1 throughout:3 reader:1 reasonable:1 electronic:2 scaling:4 comparable:2 submatrix:4 bound:6 guaranteed:3 distinguish:1 existed:1 quadratic:1 oracle:2 constraint:1 x2:1 sake:1 aspect:1 argument:1 min:12 ball:1 smaller:3 wi:4 happens:2 kakp:1 restricted:6 computationally:4 remains:3 slack:2 discus:1 turn:1 letting:4 pursuit:1 away:5 appropriate:1 v2:1 alternative:1 rp:8 assumes:2 denotes:2 cf:1 graphical:1 ncmin:2 restrictive:1 k1:8 build:1 establish:3 prof:3 objective:1 question:2 quantity:1 concentration:1 rt:1 unclear:1 reason:1 assuming:3 index:4 mini:3 nc:1 unfortunately:1 potentially:1 stated:5 xnt:2 design:33 zt:3 unknown:3 allowing:1 upper:1 observation:2 precise:1 y1:2 rn:1 kvk0:1 sharp:1 pair:4 namely:7 required:5 established:2 trans:4 below:4 regime:2 sparsity:2 including:1 max:4 royal:1 wainwright:2 power:1 suitable:1 regularized:3 zhu:1 minimax:1 brief:1 imply:1 minimaxity:1 genomics:2 literature:1 asymptotic:1 loss:3 expect:1 interesting:1 limitation:1 degree:1 sufficient:2 xp:1 s0:42 minp:5 thresholding:1 consistent:1 principle:1 share:1 row:8 cancer:1 summary:1 gl:2 soon:1 side:2 weaker:4 understand:1 taking:1 sparse:3 distributed:1 van:3 xn:2 forward:1 author:3 adaptive:4 coincide:1 san:1 selector:17 gene:1 active:6 kkt:2 assumed:1 xi:22 continuous:1 why:2 reasonably:1 robust:1 ca:3 zou:1 main:3 montanari:3 linearly:2 noise:13 allowed:1 x1:3 body:1 strengthened:1 fails:1 position:2 kuk22:1 wish:1 lq:1 pe:4 wavelet:1 theorem:10 xt:10 specific:4 covariate:4 sensing:1 exists:4 intrinsic:1 albeit:2 magnitude:1 kx:1 demand:1 chen:1 explore:1 contained:1 scalar:1 springer:1 corresponds:1 minimizer:3 satisfies:3 acm:1 succeed:1 donoho:2 ann:1 replace:1 specifically:1 uniformly:2 justify:1 lemma:10 called:3 geer:3 gauss:17 e:4 succeeds:2 formally:1 support:25 latter:2 relevance:1 bioinformatics:1 instructive:1 correlated:1 |
4,344 | 4,931 | Confidence Intervals and Hypothesis Testing for
High-Dimensional Statistical Models
Andrea Montanari
Stanford University
Stanford, CA 94305
[email protected]
Adel Javanmard
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Fitting high-dimensional statistical models often requires the use of non-linear
parameter estimation procedures. As a consequence, it is generally impossible to
obtain an exact characterization of the probability distribution of the parameter
estimates. This in turn implies that it is extremely challenging to quantify the uncertainty associated with a certain parameter estimate. Concretely, no commonly
accepted procedure exists for computing classical measures of uncertainty and
statistical significance as confidence intervals or p-values.
We consider here a broad class of regression problems, and propose an efficient
algorithm for constructing confidence intervals and p-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that
a certain parameter is vanishing, our method has nearly optimal power.
Our approach is based on constructing a ?de-biased? version of regularized Mestimators. The new construction improves over recent work in the field in that it
does not assume a special structure on the design matrix. Furthermore, proofs are
remarkably simple. We test our method on a diabetes prediction problem.
1
Introduction
It is widely recognized that modern statistical problems are increasingly high-dimensional, i.e. require estimation of more parameters than the number of observations/examples. Examples abound
from signal processing [16], to genomics [21], collaborative filtering [12] and so on. A number
of successful estimation techniques have been developed over the last ten years to tackle these
problems. A widely applicable approach consists in optimizing a suitably regularized likelihood
function. Such estimators are, by necessity, non-linear and non-explicit (they are solution of certain
optimization problems).
The use of non-linear parameter estimators comes at a price. In general, it is impossible to characterize the distribution of the estimator. This situation is very different from the one of classical
statistics in which either exact characterizations are available, or asymptotically exact ones can be
derived from large sample theory [26]. This has an important and very concrete consequence. In
classical statistics, generic and well accepted procedures are available for characterizing the uncertainty associated to a certain parameter estimate in terms of confidence intervals or p-values [28, 14].
However, no analogous procedures exist in high-dimensional statistics.
In this paper we develop a computationally efficient procedure for constructing confidence intervals
and p-values for a broad class of high-dimensional regression problems. The salient features of
our procedure are: (i) Our approach guarantees nearly optimal confidence interval sizes and testing
power. (ii) It is the first one that achieves this goal under essentially no assumptions on the population covariance matrix of the parameters, beyond the standard conditions for high-dimensional
consistency. (iii) It allows for a streamlined analysis with respect to earlier work in the same area.
1
Table 1: Unbiased estimator for ?0 in high dimensional linear regression models
Input: Measurement vector y, design matrix X, parameter ?.
Output: Unbiased estimator ?bu .
1: Set ? = ??, and let ?bn be the Lasso estimator as per Eq. (3).
b ? (XT X)/n.
2: Set ?
3: for i = 1, 2, . . . , p do
4:
Let mi be a solution of the convex program:
b
mT ?m
b ? ei k? ? ?
subject to k?m
minimize
(4)
5: Set M = (m1 , . . . , mp )T . If any of the above problems is not feasible, then set M = Ip?p .
6: Define the estimator ?bu as follows:
1
?bu = ?bn (?) + M XT (Y ? X?bn (?))
n
(5)
(iv) Our method has a natural generalization non-linear regression models (e.g. logistic regression, see Section 4). We provide heuristic and numerical evidence supporting this generalization,
deferring a rigorous study to future work.
For the sake of clarity, we will focus our presentation on the case of linear regression, deferring the generalization to Section 4. In the random design model, we are given n i.i.d. pairs
(Y1 , X1 ), (Y2 , X2 ), . . . , (Yn , Xn ), with vectors Xi ? Rp and response variables Yi given by
Wi ? N(0, ? 2 ) .
Yi = h?0 , Xi i + Wi ,
(1)
Here h ? , ? i is the standard scalar product in Rp . In matrix form, letting Y = (Y1 , . . . , Yn )T and
denoting by X the design matrix with rows X1T , . . . , XnT , we have
Y = X ?0 + W ,
W ? N(0, ? 2 In?n ) .
(2)
p
The goal is estimate the unknown (but fixed) vector of parameters ?0 ? R .
In the classic setting, n p and the estimation method of choice is ordinary least squares yielding
?bOLS = (XT X)?1 XT Y . In particular ?b is Gaussian with mean ?0 and covariance ? 2 (XT X)?1 .
This directly allows to construct confidence intervals1 .
In the high-dimensional setting where p > n, the matrix (XT X) is rank deficient and one has to
resort to biased estimators. A particularly successful approach is the Lasso [24, 7] which promotes
sparse reconstructions through an `1 penalty.
o
n 1
kY ? X?k22 + ?k?k1 .
(3)
?bn (Y, X; ?) ? arg minp
??R
2n
In case the right hand side has more than one minimizer, one of them can be selected arbitrarily for
our purposes. We will often omit the arguments Y , X, as they are clear from the context. We denote
by S ? supp(?0 ) ? [p] the support of ?0 , and let s0 ? |S|. A copious theoretical literature [6, 2, 4]
shows that, under suitable assumptions on X, the Lasso is nearly as accurate as if the support S was
known a priori. Namely, for n = ?(s0 log p), we have k?bn ? ?0 k22 = O(s0 ? 2 (log p)/n). These
remarkable properties come at a price. Deriving an exact characterization for the distribution of ?bn
is not tractable in general, and hence there is no simple procedure to construct confidence intervals
and p-values. In order to overcome this challenge, we construct a de-biased estimator from the Lasso
solution. The de-biased estimator is given by the simple formula ?bu = ?bn +(1/n) M XT (Y ?X?bn ),
as in Eq. (5). The basic intuition is that XT (Y ? X?bn )/(n?) is a subgradient of the `1 norm at the
Lasso solution ?bn . By adding a term proportional to this subgradient, our procedure compensates
the bias introduced by the `1 penalty in the Lasso.
For instance, letting Q ? (XT X/n)?1 , ?biOLS ? 1.96?
dence interval [28].
1
2
p
Qii /n, ?biOLS + 1.96?
p
Qii /n] is a 95% confi-
We will prove in Section 2 that ?bu is approximately Gaussian, with mean ?0 and covariance
b )/n, where ?
b = (XT X/n) is the empirical covariance of the feature vectors. This result
? 2 (M ?M
allows to construct confidence intervals and p-values in complete
analogy withp
classical statistics
p
u
u
b
b
b
procedures. For instance, letting Q ? M ?M , [?i ? 1.96?
? Qii /n, ?i + 1.96? Qii /n] is a 95%
confidence interval. The size of this interval is of order ?/ n, which is the optimal (minimum) one,
i.e. the same that would have been obtained by knowing a priori the support of ?0 . In practice the
noise standard deviation is not known, but ? can be replaced by any consistent estimator ?
b.
A key role is played by the matrix M ? Rp?p whose function is to ?decorrelate? the columns of X.
We propose here to construct M by solving a convex program that aims at optimizing two objectives.
b ? I|? (here and below | ? |? denotes the entrywise `? norm)
One one hand, we try to control |M ?
which ?as shown in Theorem 2.1? controls the non-Gaussianity and bias of ?bu . On the other, we
b ]i,i , for each i ? [p], which controls the variance of ?bu .
minimize [M ?M
i
The idea of constructing a de-biased estimator of the form ?bu = ?bn + (1/n) M XT (Y ? X?bn )
was used by Javanmard and Montanari in [10], that suggested the choice M = c??1 , with ? =
E{X1 X1T } the population covariance matrix and c a positive constant. A simple estimator for ?
was proposed for sparse covariances, but asymptotic validity and optimality were proven only for
uncorrelated Gaussian designs (i.e. Gaussian X with ? = I). Van de Geer, B?ulhmann and Ritov
[25] used the same construction with M an estimate of ??1 which is appropriate for sparse inverse
covariances. These authors prove semi-parametric optimality in a non-asymptotic setting, provided
the sample size is at least n = ?(s20 log p). In this paper, we do not assume any sparsity constraint on
??1 , but still require the sample size scaling n = ?(s20 log p). We refer to a forthcoming publication
wherein the condition on the sample size scaling is relaxed [11].
From a technical point of view, our proof starts from a simple decomposition of the de-biased estimator $\hat\theta^u$ into a Gaussian part and an error term, already used in [25]. However, departing radically
from earlier work, we realize that M need not be a good estimator of $\Sigma^{-1}$ in order for the de-biasing
procedure to work. We instead set M so as to minimize the error term and the variance of the Gaussian
term. As a consequence of this choice, our approach applies to general covariance structures Σ. By
contrast, earlier approaches applied only to sparse Σ, as in [10], or sparse $\Sigma^{-1}$, as in [25]. The only
assumptions we make on Σ are the standard compatibility conditions required for high-dimensional
consistency [4]. We refer the reader to the long version of the paper [9] for the proofs of our main
results and the technical steps.
1.1 Further related work
The theoretical literature on high-dimensional statistical models is vast and rapidly growing. Restricting ourselves to linear regression, earlier work investigated prediction error [8], model selection properties [17, 31, 27, 5], and $\ell_2$ consistency [6, 2]. Of necessity, we do not provide a complete set
of references, and instead refer the reader to [4] for an in-depth introduction to this area.
The problem of quantifying statistical significance in high-dimensional parameter estimation is, by
comparison, far less understood. Zhang and Zhang [30], and Bühlmann [3] proposed hypothesis
testing procedures under restricted eigenvalue or compatibility conditions [4]. These methods are
however effective only for detecting very large coefficients. Namely, they both require $|\theta_{0,i}| \ge
c\,\max\{\sigma s_0 \sqrt{(\log p)/n},\ \sigma/\sqrt{n}\}$, which is $s_0$ larger than the ideal detection level [10]. In other words,
in order for the coefficient $\theta_{0,i}$ to be detectable with appreciable probability, it needs to be larger than
the overall $\ell_2$ error, rather than the $\ell_2$ error per coordinate.
Lockhart et al. [15] develop a test for the hypothesis that a newly added coefficient along the Lasso
regularization path is irrelevant. This however does not allow testing arbitrary coefficients at a given
value of λ, which is instead the problem addressed in this paper. It further assumes that the current
Lasso support contains the actual support $\mathrm{supp}(\theta_0)$ and that the latter has bounded size. Finally,
resampling methods for hypothesis testing were studied in [29, 18, 19].
1.2 Preliminaries and notations
We let $\hat\Sigma \equiv X^TX/n$ be the sample covariance matrix. For p > n, $\hat\Sigma$ is always singular. However,
we may require $\hat\Sigma$ to be nonsingular for a restricted set of directions.
Definition 1.1. For a matrix $\hat\Sigma$ and a set S of size $s_0$, the compatibility condition is met if, for some
$\phi_0 > 0$ and all θ satisfying $\|\theta_{S^c}\|_1 \le 3\|\theta_S\|_1$, it holds that
$$\|\theta_S\|_1^2 \le \frac{s_0}{\phi_0^2}\,\theta^T\hat\Sigma\theta\,.$$
Definition 1.2. The sub-gaussian norm of a random variable X, denoted by $\|X\|_{\psi_2}$, is defined as
$$\|X\|_{\psi_2} = \sup_{p\ge 1} p^{-1/2}\,(\mathbb{E}|X|^p)^{1/p}\,.$$
The sub-gaussian norm of a random vector $X \in \mathbb{R}^n$ is defined as $\|X\|_{\psi_2} = \sup_{x\in S^{n-1}} \|\langle X, x\rangle\|_{\psi_2}$.
Further, for a random variable X, its sub-exponential norm, denoted by $\|X\|_{\psi_1}$, is defined as
$$\|X\|_{\psi_1} = \sup_{p\ge 1} p^{-1}\,(\mathbb{E}|X|^p)^{1/p}\,.$$
For a matrix A and sets of indices I, J, we let $A_{I,J}$ denote the submatrix formed by the rows in
I and columns in J. Also, $A_{I,\cdot}$ (resp. $A_{\cdot,I}$) denotes the submatrix containing just the rows (resp.
columns) in I. Likewise, for a vector v, $v_I$ is the restriction of v to indices in I. We use the shorthand
$A^{-1}_{I,J} = (A^{-1})_{I,J}$. In particular, $A^{-1}_{i,i} = (A^{-1})_{i,i}$. The maximum and the minimum singular values
of A are respectively denoted by $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$. We write $\|v\|_p$ for the standard $\ell_p$ norm of
a vector v and $\|v\|_0$ for the number of nonzero entries of v. For a matrix A, $\|A\|_p$ is the $\ell_p$ operator
norm, and $|A|_p$ is the elementwise $\ell_p$ norm, i.e., $|A|_p = (\sum_{i,j}|A_{ij}|^p)^{1/p}$. For an integer $p \ge 1$,
we let $[p] \equiv \{1, \ldots, p\}$. For a vector v, $\mathrm{supp}(v)$ represents the positions of nonzero entries of v.
Throughout, with high probability (w.h.p.) means with probability converging to one as $n \to \infty$, and
$\Phi(x) \equiv \int_{-\infty}^{x} e^{-t^2/2}\,dt/\sqrt{2\pi}$ denotes the CDF of the standard normal distribution.
2 A de-biased estimator for $\theta_0$
Theorem 2.1. Consider the linear model (1) and let $\hat\theta^u$ be defined as per Eq. (5). Then,
$$\sqrt{n}\,(\hat\theta^u - \theta_0) = Z + \Delta\,,\qquad Z|X \sim \mathsf{N}(0, \sigma^2 M\hat\Sigma M^T)\,,\qquad \Delta = \sqrt{n}\,(M\hat\Sigma - I)(\theta_0 - \hat\theta)\,.$$
Further, suppose that $\sigma_{\min}(\Sigma) = \Omega(1)$ and $\sigma_{\max}(\Sigma) = O(1)$. In addition, assume the rows of the
whitened matrix $X\Sigma^{-1/2}$ are sub-gaussian, i.e., $\|\Sigma^{-1/2}X_1\|_{\psi_2} = O(1)$. Let E be the event that the
compatibility condition holds for $\hat\Sigma$ and $\max_{i\in[p]}\hat\Sigma_{i,i} = O(1)$. Then, using $\gamma = O(\sqrt{(\log p)/n})$
(see inputs in Table 1), the following holds true. On the event E, w.h.p., $\|\Delta\|_\infty = O(s_0 \log p/\sqrt{n})$.
Note that the compatibility condition (and hence the event E) holds w.h.p. for random design matrices
of a general nature. In fact, [22] shows that, under some general assumptions, the compatibility
condition on Σ implies a similar condition on $\hat\Sigma$, w.h.p., when n is sufficiently large. Bounds on
the variances $[M\hat\Sigma M^T]_{ii}$ will be given in Section 3.2. Finally, the claim of Theorem 2.1 does not
rely on the specific choice of the objective function in optimization problem (4) and only uses the
optimization constraints.
Remark 2.2. Theorem 2.1 does not make any assumption about the parameter vector $\theta_0$. If we
further assume that the support size $s_0$ satisfies $s_0 = o(\sqrt{n}/\log p)$, then we have $\|\Delta\|_\infty = o(1)$,
w.h.p. Hence, $\hat\theta^u$ is an asymptotically unbiased estimator for $\theta_0$.
3 Statistical inference
A direct application of Theorem 2.1 is to derive confidence intervals and statistical hypothesis tests
for high dimensional models. Throughout, we make the sparsity assumption $s_0 = o(\sqrt{n}/\log p)$.
3.1 Confidence intervals
We first show that the variances of the variables $Z_j|X$ are $\Omega(1)$.
Lemma 3.1. Let $M = (m_1, \ldots, m_p)^T$ be the matrix with rows $m_i^T$ obtained by solving the convex
program (4). Then for all $i \in [p]$, $[M\hat\Sigma M^T]_{i,i} \ge (1 - \gamma)^2/\hat\Sigma_{i,i}$.
By Remark 2.2 and Lemma 3.1, we have
$$\mathbb{P}\left\{ \frac{\sqrt{n}\,(\hat\theta^u_i - \theta_{0,i})}{\sigma\,[M\hat\Sigma M^T]_{i,i}^{1/2}} \le x \,\middle|\, X \right\} = \Phi(x) + o(1)\,, \qquad \forall x \in \mathbb{R}\,. \qquad (6)$$
Since the limiting probability is independent of X, Eq. (6) also holds unconditionally for random
design X.
For constructing confidence intervals, a consistent estimate of σ is needed. To this end, we use the
scaled Lasso [23] given by
$$\{\hat\theta^n(\tilde\lambda), \hat\sigma\} \equiv \arg\min_{\theta\in\mathbb{R}^p,\,\sigma>0} \left\{ \frac{1}{2\sigma n}\,\|Y - X\theta\|_2^2 + \frac{\sigma}{2} + \tilde\lambda\,\|\theta\|_1 \right\}\,.$$
This is a joint convex minimization which provides an estimate of the noise level in addition to an
estimate of $\theta_0$. We use $\tilde\lambda = c_1\sqrt{(\log p)/n}$, which yields a consistent estimate $\hat\sigma$ under the assumptions
of Theorem 2.1 (cf. [23]). We hence obtain the following.
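The scaled Lasso lends itself to a simple alternating scheme: for fixed σ the θ-step is an ordinary Lasso with penalty $\tilde\lambda\sigma$, and for fixed θ the optimal σ is $\|Y - X\theta\|_2/\sqrt{n}$. The sketch below is a simplified illustration of this fixed-point iteration using scikit-learn's Lasso; it is not the implementation of [23], and the function name and initialization are ours:

    import numpy as np
    from sklearn.linear_model import Lasso

    def scaled_lasso(X, Y, lam_tilde, n_iter=20):
        # Alternating minimization for the scaled Lasso; returns (theta_hat, sigma_hat).
        n = X.shape[0]
        sigma = max(np.std(Y), 1e-8)  # crude initialization; guard against sigma = 0
        for _ in range(n_iter):
            # For fixed sigma, the theta-step is an ordinary Lasso:
            # sklearn minimizes ||Y - X theta||^2 / (2n) + alpha ||theta||_1, so alpha = lam_tilde * sigma.
            fit = Lasso(alpha=lam_tilde * sigma, fit_intercept=False).fit(X, Y)
            # For fixed theta, the optimal sigma is the root-mean-square residual.
            sigma = max(np.linalg.norm(Y - X @ fit.coef_) / np.sqrt(n), 1e-8)
        return fit.coef_, sigma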
Corollary 3.2. Let
$$\delta(\alpha, n) = \Phi^{-1}(1 - \alpha/2)\,\hat\sigma\, n^{-1/2}\, [M\hat\Sigma M^T]_{i,i}^{1/2}\,. \qquad (7)$$
Then $I_i = [\hat\theta^u_i - \delta(\alpha, n),\ \hat\theta^u_i + \delta(\alpha, n)]$ is an asymptotic two-sided confidence interval for $\theta_{0,i}$ with
significance α.
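Assembled from the pieces above, the intervals of Corollary 3.2 are straightforward to compute. A minimal Python sketch (assuming theta_u, M, Sigma_hat, and a consistent noise estimate sigma_hat are already available; the helper name is ours):

    import numpy as np
    from scipy.stats import norm

    def confidence_intervals(theta_u, M, Sigma_hat, sigma_hat, n, alpha=0.05):
        # Two-sided (1 - alpha) intervals from Corollary 3.2.
        Q = M @ Sigma_hat @ M.T
        delta = norm.ppf(1 - alpha / 2) * sigma_hat * np.sqrt(np.diag(Q) / n)
        return theta_u - delta, theta_u + delta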
Notice that the same corollary applies to any other consistent estimator $\hat\sigma$ of the noise standard
deviation.
3.2 Hypothesis testing
An important advantage of sparse linear regression models is that they provide parsimonious explanations of the data in terms of a small number of covariates. The easiest way to select the "active"
covariates is to choose the indexes i for which $\hat\theta^n_i \ne 0$. This approach however does not provide a
measure of statistical significance for the finding that the coefficient is non-zero.
More precisely, we are interested in testing an individual null hypothesis $H_{0,i}: \theta_{0,i} = 0$ versus the
alternative $H_{A,i}: \theta_{0,i} \ne 0$, and assigning p-values for these tests. We construct a p-value $P_i$ for the
test $H_{0,i}$ as follows:
$$P_i = 2\left(1 - \Phi\left( \frac{\sqrt{n}\,|\hat\theta^u_i|}{\hat\sigma\,[M\hat\Sigma M^T]_{i,i}^{1/2}} \right)\right)\,. \qquad (8)$$
The decision rule is then based on the p-value $P_i$:
$$T_{i,X}(y) = \begin{cases} 1 & \text{if } P_i \le \alpha \quad (\text{reject } H_{0,i})\,, \\ 0 & \text{otherwise} \quad (\text{accept } H_{0,i})\,. \end{cases} \qquad (9)$$
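The corresponding computation for Eqs. (8)-(9) is equally direct; the following sketch (same assumptions and naming caveats as the earlier snippets) returns the p-values and the rejection decisions:

    import numpy as np
    from scipy.stats import norm

    def p_values_and_tests(theta_u, M, Sigma_hat, sigma_hat, n, alpha=0.05):
        # p-values from Eq. (8) and decisions from Eq. (9).
        se = sigma_hat * np.sqrt(np.diag(M @ Sigma_hat @ M.T))
        z = np.sqrt(n) * np.abs(theta_u) / se
        p_vals = 2 * (1 - norm.cdf(z))
        reject = p_vals <= alpha  # T_{i,X}(y) = 1 means "reject H_{0,i}"
        return p_vals, reject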
We measure the quality of the test $T_{i,X}(y)$ in terms of its significance level $\alpha_i$ and statistical power
$1 - \beta_i$. Here $\alpha_i$ is the probability of type I error (i.e. of a false positive at i) and $\beta_i$ is the probability
of type II error (i.e. of a false negative at i).
Note that it is important to consider the tradeoff between statistical significance and power. Indeed
any significance level α can be achieved by randomly rejecting $H_{0,i}$ with probability α. This test
achieves power $1 - \beta = \alpha$. Further note that, without further assumption, no nontrivial power can
be achieved. In fact, choosing $\theta_{0,i} \ne 0$ arbitrarily close to zero, $H_{0,i}$ becomes indistinguishable
from its alternative. We will therefore assume that, whenever $\theta_{0,i} \ne 0$, we have $|\theta_{0,i}| > \mu$ as well.
Formally, for ? > 0 and i ? [p], define
n
o
?i (n) ? sup P?0 (Ti,X (y) = 1) : ?0 ? Rp , k?0 k0 ? s0 (n), ?0,i = 0 .
n
o
?i (n; ?) ? sup P?0 (Ti,X (y) = 0) : ?0 ? Rp , k?0 k0 ? s0 (n), |?0,i | ? ? .
5
Here, we made the dependence on n explicit. Also, $\mathbb{P}_\theta(\cdot)$ is the probability induced by the random design
X and noise realization w, given the fixed parameter vector θ. Our next theorem establishes bounds
on $\alpha_i(n)$ and $\beta_i(n; \mu)$.
Theorem 3.3. Consider a random design model that satisfies the conditions of Theorem 2.1. Under
the sparsity assumption $s_0 = o(\sqrt{n}/\log p)$, the following holds true for any fixed sequence of
integers $i = i(n)$:
$$\lim_{n\to\infty} \alpha_i(n) \le \alpha\,. \qquad (10)$$
$$\lim_{n\to\infty} \frac{1 - \beta_i(\mu; n)}{1 - \beta_i^*(\mu; n)} \ge 1\,, \qquad 1 - \beta_i^*(\mu; n) \equiv G\left(\alpha,\ \frac{\sqrt{n}\,\mu}{\sigma\,[\Sigma^{-1}_{i,i}]^{1/2}}\right)\,, \qquad (11)$$
where, for $\alpha \in [0, 1]$ and $u \in \mathbb{R}_+$, the function $G(\alpha, u)$ is defined as follows:
$$G(\alpha, u) = 2 - \Phi\big(\Phi^{-1}(1 - \tfrac{\alpha}{2}) + u\big) - \Phi\big(\Phi^{-1}(1 - \tfrac{\alpha}{2}) - u\big)\,.$$
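For reference, $G(\alpha, u)$ is a two-line computation, which makes the power curves predicted by Theorem 3.3 easy to plot. A small illustrative helper (ours, not part of the paper):

    from scipy.stats import norm

    def G(alpha, u):
        # G(alpha, u) = 2 - Phi(z + u) - Phi(z - u), with z = Phi^{-1}(1 - alpha/2).
        z = norm.ppf(1 - alpha / 2)
        return 2 - norm.cdf(z + u) - norm.cdf(z - u)

    # Sanity check: G(0.05, 0.0) == 0.05 (the trivial power), and G increases in u.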
It is easy to see that, for any $\alpha > 0$, the map $u \mapsto G(\alpha, u)$ is continuous and monotone increasing. Moreover,
$G(\alpha, 0) = \alpha$, which is the trivial power obtained by randomly rejecting $H_{0,i}$ with probability α. As
μ deviates from zero, we obtain nontrivial power. Notice that in order to achieve a specific power
$\beta > \alpha$, our scheme requires $\mu = O(\sigma/\sqrt{n})$, since $\Sigma^{-1}_{i,i} \le \sigma_{\max}(\Sigma^{-1}) \le (\sigma_{\min}(\Sigma))^{-1} = O(1)$.
3.2.1 Minimax optimality
The authors of [10] prove an upper bound for the minimax power of tests with a given significance
level α, under Gaussian random design models (see Theorem 2.6 therein). This bound is obtained
by considering an oracle test that knows all the active parameters except i, i.e., $S\setminus\{i\}$. To state the
bound formally, for a set $S \subseteq [p]$ and $i \in S^c$, define $\Sigma_{i|S} \equiv \Sigma_{i,i} - \Sigma_{i,S}(\Sigma_{S,S})^{-1}\Sigma_{S,i}$, and let
$$\eta_{\Sigma,s_0} \equiv \min_{i\in[p],\,S}\big\{ \Sigma_{i|S} :\ S \subseteq [p]\setminus\{i\},\ |S| < s_0 \big\}\,.$$
In the asymptotic regime and under our sparsity assumption $s_0 = o(\sqrt{n}/\log p)$, the bound of [10]
simplifies to
$$\lim_{n\to\infty} \frac{1 - \beta_i^{\mathrm{opt}}(\alpha; \mu)}{G(\alpha, \mu/\mu_{\mathrm{eff}})} \le 1\,, \qquad \mu_{\mathrm{eff}} = \frac{\sigma}{\sqrt{n\,\eta_{\Sigma,s_0}}}\,. \qquad (12)$$
Using the bound of (12) and specializing the result of Theorem 3.3 to Gaussian design X, we obtain
that our scheme achieves a near optimal minimax power for a broad class of covariance matrices.
We can compare our test to the optimal test by computing how much μ must be increased in order to
achieve the minimax optimal power. It follows from the above that μ must be increased to $\tilde\mu$, with
the two differing by a factor:
$$\tilde\mu/\mu = \sqrt{\Sigma^{-1}_{i,i}\,\eta_{\Sigma,s_0}} \le \sqrt{\Sigma^{-1}_{i,i}\,\Sigma_{i,i}} \le \sqrt{\sigma_{\max}(\Sigma)/\sigma_{\min}(\Sigma)}\,,$$
since $\Sigma^{-1}_{i,i} \le (\sigma_{\min}(\Sigma))^{-1}$, and $\Sigma_{i|S} \le \Sigma_{i,i} \le \sigma_{\max}(\Sigma)$ due to $\Sigma_{S,S} \succeq 0$.
4 General regularized maximum likelihood
In this section, we generalize our results beyond the linear regression model to general regularized
maximum likelihood. Here, we only describe the de-biasing method. Formal guarantees can be
obtained under suitable restricted strong convexity assumptions [20] and will be the object of a
forthcoming publication.
For univariate Y and vector $X \in \mathbb{R}^p$, we let $\{f_\theta(Y|X)\}_{\theta\in\mathbb{R}^p}$ be a family of conditional probability
densities parameterized by θ that are absolutely continuous with respect to a common measure,
and suppose that the gradient $\nabla_\theta f_\theta(Y|X)$ exists and is square integrable.
As in linear regression, we assume that the data is given by n i.i.d. pairs $(X_1, Y_1), \ldots, (X_n, Y_n)$,
where, conditional on $X_i$, the response variable $Y_i$ is distributed as
$$Y_i \sim f_{\theta_0}(\,\cdot\,|X_i)\,,$$
for some parameter vector $\theta_0 \in \mathbb{R}^p$. Let $L_i(\theta) = -\log f_\theta(Y_i|X_i)$ be the normalized negative
log-likelihood corresponding to the observed pair $(Y_i, X_i)$, and define $L(\theta) = \frac{1}{n}\sum_{i=1}^n L_i(\theta)$. We
consider the following regularized estimator:
$$\hat\theta \equiv \arg\min_{\theta\in\mathbb{R}^p}\ L(\theta) + \lambda R(\theta)\,, \qquad (13)$$
where λ is a regularization parameter and $R: \mathbb{R}^p \to \mathbb{R}_+$ is a norm.
We next generalize the definition of $\hat\Sigma$. Let $I_i(\theta)$ be the Fisher information of $f_\theta(Y|X_i)$, defined as
$$I_i(\theta) \equiv \mathbb{E}\big[ \nabla_\theta \log f_\theta(Y|X_i)\, \nabla_\theta \log f_\theta(Y|X_i)^T \,\big|\, X_i \big] = -\mathbb{E}\big[ \nabla^2_\theta \log f_\theta(Y|X_i) \,\big|\, X_i \big]\,,$$
where the second identity holds under suitable regularity conditions [13], and $\nabla^2_\theta$ denotes the Hessian operator. We assume $\mathbb{E}[I_i(\theta)] \succ 0$ and define $\hat\Sigma \in \mathbb{R}^{p\times p}$ as follows:
$$\hat\Sigma \equiv \frac{1}{n}\sum_{i=1}^n I_i(\hat\theta)\,. \qquad (14)$$
Note that (in general) $\hat\Sigma$ depends on $\hat\theta$. Finally, the de-biased estimator $\hat\theta^u$ is defined by $\hat\theta^u \equiv
\hat\theta - M\nabla_\theta L(\hat\theta)$, with M given again by the solution of the convex program (4), and the definition of
$\hat\Sigma$ provided here. Notice that this construction is analogous to the one in [25] (although the present
setting is somewhat more general), with the crucial difference of the construction of M.
A simple heuristic derivation of this method is the following. By Taylor expansion of $L(\hat\theta)$
around $\theta_0$ we get $\hat\theta^u \approx \hat\theta - M\nabla_\theta L(\theta_0) - M\nabla^2_\theta L(\theta_0)(\hat\theta - \theta_0)$. Approximating $\nabla^2_\theta L(\theta_0) \approx \hat\Sigma$
(which amounts to taking expectation with respect to the response variables $y_i$), we get $\hat\theta^u - \theta_0 \approx
-M\nabla_\theta L(\theta_0) - [M\hat\Sigma - I](\hat\theta - \theta_0)$. Conditionally on $\{X_i\}_{1\le i\le n}$, the first term has zero expectation
and covariance $[M\hat\Sigma M^T]/n$. Further, by the central limit theorem, its low-dimensional marginals are approximately Gaussian. The bias term $-[M\hat\Sigma - I](\hat\theta - \theta_0)$ can be bounded as in the linear regression
case, building on the fact that M is chosen such that $|M\hat\Sigma - I|_\infty \le \gamma$.
Similar to the linear case, an asymptotic two-sided confidence interval for $\theta_{0,i}$ (with significance α)
is given by $I_i = [\hat\theta^u_i - \delta(\alpha, n),\ \hat\theta^u_i + \delta(\alpha, n)]$, where
$$\delta(\alpha, n) = \Phi^{-1}(1 - \alpha/2)\, n^{-1/2}\, [M\hat\Sigma M^T]_{i,i}^{1/2}\,.$$
Moreover, an asymptotically valid p-value $P_i$ for testing the null hypothesis $H_{0,i}$ is constructed as:
$$P_i = 2\left(1 - \Phi\left( \frac{\sqrt{n}\,|\hat\theta^u_i|}{[M\hat\Sigma M^T]_{i,i}^{1/2}} \right)\right)\,.$$
In the next section, we shall apply the general approach presented here to $L_1$-regularized logistic
regression. In this case, the binary response $Y_i \in \{0, 1\}$ is distributed as $Y_i \sim f_{\theta_0}(\,\cdot\,|X_i)$, where
$f_{\theta_0}(1|x) = (1 + e^{-\langle x,\theta_0\rangle})^{-1}$ and $f_{\theta_0}(0|x) = (1 + e^{\langle x,\theta_0\rangle})^{-1}$. It is easy to see that in this case
$I_i(\hat\theta) = \hat q_i(1 - \hat q_i)X_iX_i^T$, with $\hat q_i = (1 + e^{-\langle\hat\theta,X_i\rangle})^{-1}$, and thus
$$\hat\Sigma = \frac{1}{n}\sum_{i=1}^n \hat q_i(1 - \hat q_i)\,X_iX_i^T\,.$$
5 Diabetes data example
We consider the problem of estimating relevant attributes in predicting type-2 diabetes. We evaluate
the performance of our hypothesis testing procedure on the Practice Fusion Diabetes dataset [1].
This dataset contains de-identified medical records of 10000 patients, including information on diagnoses, medications, lab results, allergies, immunizations, and vital signs. From this dataset, we extract p numerical attributes, resulting in a sparse design matrix $X_{\mathrm{tot}} \in \mathbb{R}^{n_{\mathrm{tot}}\times p}$, with $n_{\mathrm{tot}} = 10000$
and p = 805 (only 5.9% of the entries of $X_{\mathrm{tot}}$ are non-zero). Next, we standardize the columns of X to
have mean 0 and variance 1. The attributes consist of: (i) transcript records: year of birth, gender,
and BMI; (ii) diagnosis information: 80 binary attributes corresponding to different ICD-9 codes;
(iii) medications: 80 binary attributes indicating the use of different medications; (iv) lab results:
for 70 lab test observations, we include attributes indicating patients tested, abnormality flags, and
the observed values. We also bin the observed values into 10 quantiles and make 10 binary attributes
indicating the bin of the corresponding observed value.

Figure 1: Q-Q plot of Z (panel a) and normalized histograms of $\tilde Z_S$ (in red) and $\tilde Z_{S^c}$ (in blue) for one realization (panel b). No fitting of the Gaussian mean and variance was done in panel (b).
We consider the logistic model as described in the previous section with a binary response identifying
the patients diagnosed with type-2 diabetes. For the sake of performance evaluation, we need to
know the true significant attributes. Letting L(θ) be the logistic loss corresponding to the design
$X_{\mathrm{tot}}$ and response vector $Y \in \mathbb{R}^{n_{\mathrm{tot}}}$, we take $\theta_0$ as the minimizer of L(θ). Notice that here we are
in the low dimensional regime ($n_{\mathrm{tot}} > p$) and no regularization is needed.
Next, we take random subsamples of size n = 500 from the patients, and examine the performance
of our testing procedure. The experiment is done using the glmnet package in R, which fits the entire path
of the regularized logistic estimator. We then choose the value of λ that yields maximum AUC (area
under the ROC curve), approximated by 5-fold cross validation.
Results: Type I errors and powers of our decision rule (9) are computed by comparing to $\theta_0$. The
average error and power (over 20 random subsamples) at significance level α = 0.05 are, respectively, 0.0319 and 0.818. Let $Z = (z_i)_{i=1}^p$ denote the vector with $z_i \equiv \sqrt{n}\,(\hat\theta^u_i - \theta_{0,i})/[M\hat\Sigma M^T]_{i,i}^{1/2}$.
In Fig. 1(a), sample quantiles of Z are depicted versus the quantiles of a standard normal distribution. The plot clearly corroborates our theoretical result regarding the limiting distribution of Z.
In order to build further intuition about the proposed p-values, let $\tilde Z = (\tilde z_i)_{i=1}^p$ be the vector with
$\tilde z_i \equiv \sqrt{n}\,\hat\theta^u_i/[M\hat\Sigma M^T]_{i,i}^{1/2}$. In Fig. 1(b), we plot the normalized histograms of $\tilde Z_S$ (in red) and $\tilde Z_{S^c}$ (in
blue). As the plot showcases, $\tilde Z_{S^c}$ has roughly a standard normal distribution, and the entries of $\tilde Z_S$
appear as distinguishable spikes. The entries of $\tilde Z_S$ with larger magnitudes are easier to mark
off from the normal distribution tail.
References
[1] Practice Fusion Diabetes Classification. http://www.kaggle.com/c/pf2012-diabetes, 2012. Kaggle competition dataset.
[2] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Amer. J. of Mathematics, 37:1705–1732, 2009.
[3] P. Bühlmann. Statistical significance in high-dimensional linear models. arXiv:1202.1377, 2012.
[4] P. Bühlmann and S. van de Geer. Statistics for high-dimensional data. Springer-Verlag, 2011.
[5] E. Candès and Y. Plan. Near-ideal model selection by $\ell_1$ minimization. The Annals of Statistics, 37(5A):2145–2177, 2009.
[6] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. on Inform. Theory, 51:4203–4215, 2005.
[7] S. Chen and D. Donoho. Examples of basis pursuit. In Proceedings of Wavelet Applications in Signal and Image Processing III, San Diego, CA, 1995.
[8] E. Greenshtein and Y. Ritov. Persistence in high-dimensional predictor selection and the virtue of overparametrization. Bernoulli, 10:971–988, 2004.
[9] A. Javanmard and A. Montanari. Confidence Intervals and Hypothesis Testing for High-Dimensional Regression. arXiv:1306.3171, 2013.
[10] A. Javanmard and A. Montanari. Hypothesis testing in high-dimensional regression under the gaussian random design model: Asymptotic theory. arXiv:1301.4240, 2013.
[11] A. Javanmard and A. Montanari. Nearly Optimal Sample Size in Hypothesis Testing for High-Dimensional Regression. arXiv:1311.0274, 2013.
[12] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, August 2009.
[13] E. Lehmann and G. Casella. Theory of point estimation. Springer, 2 edition, 1998.
[14] E. Lehmann and J. Romano. Testing statistical hypotheses. Springer, 2005.
[15] R. Lockhart, J. Taylor, R. Tibshirani, and R. Tibshirani. A significance test for the lasso. arXiv:1301.7161, 2013.
[16] M. Lustig, D. Donoho, J. Santos, and J. Pauly. Compressed sensing MRI. IEEE Signal Processing Magazine, 25:72–82, 2008.
[17] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Ann. Statist., 34:1436–1462, 2006.
[18] N. Meinshausen and P. Bühlmann. Stability selection. J. R. Statist. Soc. B, 72:417–473, 2010.
[19] J. Minnier, L. Tian, and T. Cai. A perturbation method for inference on regularized regression estimates. Journal of the American Statistical Association, 106(496), 2011.
[20] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[21] J. Peng, J. Zhu, A. Bergamaschi, W. Han, D.-Y. Noh, J. R. Pollack, and P. Wang. Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast cancer. The Annals of Applied Statistics, 4(1):53–77, 2010.
[22] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434–3447, 2013.
[23] T. Sun and C.-H. Zhang. Scaled sparse linear regression. Biometrika, 99(4):879–898, 2012.
[24] R. Tibshirani. Regression shrinkage and selection with the Lasso. J. Royal. Statist. Soc B, 58:267–288, 1996.
[25] S. van de Geer, P. Bühlmann, and Y. Ritov. On asymptotically optimal confidence regions and tests for high-dimensional models. arXiv:1303.0518, 2013.
[26] A. W. van der Vaart. Asymptotic statistics, volume 3. Cambridge University Press, 2000.
[27] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming. IEEE Trans. on Inform. Theory, 55:2183–2202, 2009.
[28] L. Wasserman. All of statistics: a concise course in statistical inference. Springer Verlag, 2004.
[29] L. Wasserman and K. Roeder. High dimensional variable selection. Annals of Statistics, 37(5A):2178, 2009.
[30] C.-H. Zhang and S. Zhang. Confidence Intervals for Low-Dimensional Parameters in High-Dimensional Linear Models. arXiv:1110.2563, 2011.
[31] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
Compressive Feature Learning
Hristo S. Paskov
Department of Computer Science
Stanford University
[email protected]

Robert West
Department of Computer Science
Stanford University
[email protected]

John C. Mitchell
Department of Computer Science
Stanford University
[email protected]

Trevor J. Hastie
Department of Statistics
Stanford University
[email protected]
Abstract
This paper addresses the problem of unsupervised feature learning for text data.
Our method is grounded in the principle of minimum description length and uses
a dictionary-based compression scheme to extract a succinct feature set. Specifically, our method finds a set of word k-grams that minimizes the cost of reconstructing the text losslessly. We formulate document compression as a binary optimization task and show how to solve it approximately via a sequence of
reweighted linear programs that are efficient to solve and parallelizable. As our
method is unsupervised, features may be extracted once and subsequently used in
a variety of tasks. We demonstrate the performance of these features over a range
of scenarios including unsupervised exploratory analysis and supervised text categorization. Our compressed feature space is two orders of magnitude smaller than
the full k-gram space and matches the text categorization accuracy achieved in the
full feature space. This dimensionality reduction not only results in faster training
times, but it can also help elucidate structure in unsupervised learning tasks and
reduce the amount of training data necessary for supervised learning.
1 Introduction
Machine learning algorithms rely critically on the features used to represent data; the feature set
provides the primary interface through which an algorithm can reason about the data at hand. A typical pitfall for many learning problems is that there are too many potential features to choose from.
Intelligent subselection is essential in these scenarios because it can discard noise from irrelevant
features, thereby requiring fewer training examples and preventing overfitting. Computationally,
a smaller feature set is almost always advantageous as it requires less time and space to train the
algorithm and make inferences [10, 9].
Various heuristics have been proposed for feature selection, one class of which work by evaluating each feature separately with respect to its discriminative power. Some examples are document
frequency, chi-square value, information gain, and mutual information [26, 9]. More sophisticated
methods attempt to achieve feature sparsity by optimizing objective functions containing an L1 regularization penalty [25, 27].
Unsupervised feature selection methods [19, 18, 29, 13] are particularly attractive. First, they do not
require labeled examples, which are often expensive to obtain (e.g., when humans have to provide
them) or might not be available in advance (e.g., in text classification, the topic to be retrieved might
be defined only at some later point). Second, they can be run a single time in an offline preprocessing
step, producing a reduced feature space that allows for subsequent rapid experimentation. Finally,
a good data representation obtained in an unsupervised way captures inherent structure and can be
used in a variety of machine learning tasks such as clustering, classification, or ranking.
In this work we present a novel unsupervised method for feature selection for text data based on ideas
from data compression and formulated as an optimization problem. As the universe of potential
features, we consider the set of all word k-grams.¹ The basic intuition is that substrings appearing
frequently in a corpus represent a recurring theme in some of the documents, and hence pertain
to class representation. However, it is not immediately clear how to implement this intuition. For
instance, consider a corpus of NIPS papers. The bigram "supervised learning" will appear often, but
so will the constituent unigrams "supervised" and "learning". So shall we use the bigram, the two
separate unigrams, or a combination, as features?
Our solution invokes the principle of minimum description length (MDL) [23]: First, we compress
the corpus using a dictionary-based lossless compression method. Then, the substrings that are used
to reconstruct each document serve as the feature set. We formulate the compression task as a numerical optimization problem. The problem is non-convex, but we develop an efficient approximate
algorithm that is linear in the number of words in the corpus and highly parallelizable. In the example, the bigram "supervised learning" would appear often enough to be added to the dictionary;
"supervised" and "learning" would also be chosen as features if they appear separately in combinations other than "supervised learning" (because the compression paradigm we choose is lossless).
We apply our method to two datasets and compare it to a canonical bag-of-k-grams representation.
Our method reduces the feature set size by two orders of magnitude without incurring a loss of
performance on several text categorization tasks. Moreover, it expedites training times and requires
significantly less labeled training data on some text categorization tasks.
2 Compression and Machine Learning
Our work draws on a deep connection between data compression and machine learning, exemplified early on by the celebrated MDL principle [23]. More recently, researchers have experimented
with off-the-shelf compression algorithms as machine learning subroutines. Instances are Frank et
al.'s [7] compression-based approach to text categorization, as well as compression-based distance
measures, where the basic intuition is that, if two texts x and y are very similar, then the compressed
version of their concatenation xy should not be much longer than the compressed version of either
x or y separately. Such approaches have been shown to work well on a variety of tasks such as
language clustering [1], authorship attribution [1], time-series clustering [6, 11], anomaly detection
[11], and spam filtering [3].
Distance-based approaches are akin to kernel methods, and thus suffer from the problem that constructing the full kernel matrix for large datasets might be infeasible. Furthermore, Frank et al.
[7] deplore that "it is hard to see how efficient feature selection could be incorporated" into the
compression algorithm. But Sculley and Brodley [24] show that many compression-based distance
measures can be interpreted as operating in an implicit high-dimensional feature space, spanned by
the dictionary elements found during compression. We build on this observation to address Frank et
al.'s above-cited concern about the impossibility of feature selection for compression-based methods. Instead of using an off-the-shelf compression algorithm as a black-box kernel operating in
an implicit high-dimensional feature space, we develop an optimization-based compression scheme
whose explicit job it is to perform feature selection.
It is illuminating to discuss a related approach suggested (as future work) by Sculley and Brodley
[24], namely "to store substrings found by Lempel–Ziv schemes as explicit features". This simplistic
approach suffers from a serious flaw that our method overcomes. Imagine we want to extract features
from an entire corpus. We would proceed by concatenating all documents in the corpus into a single
large document D, which we would compress using a Lempel–Ziv algorithm. The problem is that
the extracted substrings are dependent on the order in which we concatenate the documents to form
the input D. For the sake of concreteness, consider LZ77 [28], a prominent member of the Lempel–Ziv family (but the argument applies equally to most standard compression algorithms). Starting
from the current cursor position, LZ77 scans D from left to right, consuming characters until it
has found the longest prefix matching a previously seen substring. It then outputs a pointer to that
previous instance (we interpret this substring as a feature) and continues with the remaining input
string (if no prefix matches, the single next character is output). This approach produces different
feature sets depending on the order in which documents are concatenated. Even in small instances
such as the 3-document collection {D1 = abcd, D2 = ceab, D3 = bce}, the order (D1, D2, D3) yields
the feature set {ab, bc}, whereas (D2, D3, D1) results in {ce, ab} (plus, trivially, the set of all single
characters).
As we will demonstrate in our experiments section, this instability has a real impact on performance
and is therefore undesirable. Our approach, like LZ77, seeks common substrings. However, our
formulation is not affected by the concatenation order of corpus documents and does not suffer from
LZ77's instability issues.

Figure 1: Toy example of our optimization problem for text compression. Three different solutions
shown for representing the 8-word document D = manamana in terms of dictionary and pointers:
minimum dictionary cost (λ = 0): dictionary {m, a, n}, 8 pointers, cost 3 + (0 × 8) = 3; minimum
combined cost (λ = 1): dictionary {mana}, 2 pointers, cost 4 + (1 × 2) = 6; minimum pointer cost
(λ = 8): dictionary {manamana}, 1 pointer, cost 8 + (8 × 1) = 16. Dictionary cost: number of
characters in dictionary. Pointer cost: λ × number of pointers. Costs given as dictionary cost +
pointer cost.

¹ In the remainder of this paper, the term "k-grams" includes sequences of up to (rather than exactly) k words.
3 Compressive Feature Learning
The MDL principle implies that a good feature representation for a document $D = x_1x_2\ldots x_n$ of
n words minimizes some description length of D. Our dictionary-based compression scheme accomplishes this by representing D as a dictionary (a subset of D's substrings) and a sequence of
pointers indicating where copies of each dictionary element should be placed in order to fully reconstruct the document. The compressed representation is chosen so as to minimize the cost of storing
each dictionary element in plaintext as well as all necessary pointers. This scheme achieves a shorter
description length whenever it can reuse dictionary elements at different locations in D.
For a concrete example, see Fig. 1, which shows three ways of representing a document D in terms
of a dictionary and pointers. These representations are obtained by using the same pointer storage
cost λ for each pointer and varying λ. The two extreme solutions focus on minimizing either the
dictionary cost (λ = 0) or the pointer cost (λ = 8) solely, while the middle solution (λ = 1) trades
off between minimizing a combination of the two. We are particularly interested in this tradeoff:
when all pointers have the same cost, the dictionary and pointer costs pull the solution in opposite
directions. Varying λ allows us to "interpolate" between the two extremes of minimum dictionary
cost and minimum pointer cost. In other words, λ can be interpreted as tracing out a regularization
path that allows a more flexible representation of D.
To formalize our compression criterion, let $S = \{\, x_i \ldots x_{i+t-1} \mid 1 \le t \le k,\ 1 \le i \le n - t + 1 \,\}$ be the
set of all unique k-grams in D, and $P = \{\, (s, l) \mid s = x_l \ldots x_{l+|s|-1} \,\}$ be the set of all $m = |P|$ (potential)
pointers. Without loss of generality, we assume that P is an ordered set, i.e., each $i \in \{1, \ldots, m\}$
corresponds to a unique $p_i \in P$, and we define $J(s) \subseteq \{1, \ldots, m\}$ to be the set of indices of all
pointers which share the same string s. Given a binary vector $w \in \{0, 1\}^m$, w reconstructs word $x_j$ if
for some $w_i = 1$ the corresponding pointer $p_i = (s, l)$ satisfies $l \le j < l + |s|$. This notation uses $w_i$
to indicate whether pointer $p_i$ should be used to reconstruct (part of) D by pasting a copy of string s
into location l. Finally, w reconstructs D if every $x_j$ is reconstructed by w.
Compressing D can be cast as a binary linear minimization problem over w; this bit vector tells us
which pointers to use in the compressed representation of D, and it implicitly defines the dictionary
(a subset of S). In order to ensure that w reconstructs D, we require that $Xw \ge 1$. Here $X \in \{0, 1\}^{n\times m}$
indicates which words each $w_i = 1$ can reconstruct: the i-th column of X is zero everywhere except
for a contiguous sequence of ones corresponding to the words which $w_i = 1$ reconstructs. Next, we
assume the pointer storage cost of setting $w_i = 1$ is given by $d_i \ge 0$ and that the cost of storing any
$s \in S$ is $c(s)$. Note that s must be stored in the dictionary if $\|w_{J(s)}\|_\infty = 1$, i.e., some pointer using s
is used in the compression of D. Putting everything together, our lossless compression criterion is
$$\min_{w}\ w^T d + \sum_{s\in S} c(s)\,\|w_{J(s)}\|_\infty \quad \text{subject to} \quad Xw \ge 1,\ w \in \{0, 1\}^m. \qquad (1)$$
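To make the objects in (1) concrete, the following Python sketch (a brute-force illustration on a toy document, not the solver developed below; all names are ours) enumerates the pointer set P, builds the reconstruction matrix X, and evaluates the objective for a candidate binary vector w:

    import numpy as np

    def build_instance(doc, k, lam):
        # Enumerate pointers P, reconstruction matrix X, and the costs (d, c) of criterion (1).
        n = len(doc)
        P = [(tuple(doc[l:l + t]), l) for t in range(1, k + 1) for l in range(n - t + 1)]
        m = len(P)
        X = np.zeros((n, m), dtype=int)
        for i, (s, l) in enumerate(P):
            X[l:l + len(s), i] = 1        # column i covers words l, ..., l + |s| - 1
        d = np.full(m, lam)               # uniform pointer storage cost lambda
        c = {s: len(s) for s, _ in P}     # dictionary cost of s: its word length
        return P, X, d, c

    def objective(w, P, d, c):
        # w^T d + sum_s c(s) * ||w_{J(s)}||_inf for a binary pointer-selection vector w.
        dict_cost = {}
        for j, (s, _) in enumerate(P):
            dict_cost[s] = max(dict_cost.get(s, 0), w[j])
        return w @ d + sum(c[s] * u for s, u in dict_cost.items())

    def feasible(w, X):
        # Lossless reconstruction requires every word to be covered: X w >= 1.
        return bool(np.all(X @ w >= 1))

    # Example: doc = list("manamana"), k = 4. Selecting two "mana" pointers is feasible
    # and costs 4 + 2 * lam, matching the middle solution of Fig. 1 when lam = 1.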
Finally, multiple documents can be compressed jointly by concatenating them in any order into a
large document and disallowing any pointers that span document boundaries. Since this objective is
invariant to the order in which documents are concatenated, it does not suffer from the same problems as LZ77
(cf. Section 2).
4 Optimization Algorithm
The binary constraint makes the problem in (1) non-convex. We solve it approximately via a series
of related convex problems $P^{(1)}, P^{(2)}, \ldots$ that converge to a good optimum. Each $P^{(i)}$ relaxes the
binary constraint to only require $0 \le w \le 1$ and solves a weighted optimization problem
$$\min_{w}\ w^T \tilde d^{(i)} + \sum_{s\in S} c(s)\,\|D^{(i)}_{J(s)J(s)} w_{J(s)}\|_\infty \quad \text{subject to} \quad Xw \ge 1,\ 0 \le w \le 1. \qquad (2)$$
Here, $D^{(i)}$ is an $m\times m$ diagonal matrix of positive weights, and $\tilde d^{(i)} = D^{(i)} d$ for brevity. We use an
iterative reweighting scheme that uses $D^{(1)} = I$ and $D^{(i+1)}_{jj} = \max\{1, (w^{(i)}_j + \epsilon)^{-1}\}$, where $w^{(i)}$ is
the solution to $P^{(i)}$. This scheme is inspired by the iterative reweighting method of Candès et al. [5]
for solving problems involving L0 regularization. At a high level, reweighting can be motivated by
noting that (2) recovers the correct binary solution if ε is sufficiently small and we use as weights
a nearly binary solution to (1). Since we do not know the correct weights, we estimate them from
our best guess of the solution to (1). In turn, $D^{(i+1)}$ punishes coefficients that were small in $w^{(i)}$ and,
taken together with the constraint $Xw \ge 1$, pushes the solution to be binary.
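In pseudocode, the outer loop is a standard reweighting skeleton; each pass calls a solver for the relaxed problem (2). The sketch below is an illustration only, with solve_relaxed an abstract stub standing in for the ADMM routine derived next:

    import numpy as np

    def reweighted_compression(solve_relaxed, m, n_rounds=5, eps=1e-3):
        # Sequence of reweighted relaxations P^(1), P^(2), ...; pushes w toward a binary solution.
        D = np.ones(m)                            # diagonal of D^(1) = I
        for _ in range(n_rounds):
            w = solve_relaxed(D)                  # solves (2) with weights D; returns w in [0, 1]^m
            D = np.maximum(1.0, 1.0 / (w + eps))  # D^(i+1)_jj = max{1, (w_j + eps)^(-1)}
        return w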
ADMM Solution We demonstrate an efficient and parallel algorithm to solve (2) based on the
Alternating Directions Method of Multipliers (ADMM) [2]. Problem (2) is a linear program solvable
by a general purpose method in $O(m^3)$ time. However, if all potential dictionary elements are no
longer than k words in length, we can use problem structure to achieve a run time of $O(k^2 n)$ per step
of ADMM, i.e., linear in the document length. This is helpful because k is relatively small in most
scenarios: long k-grams tend to appear only once and are not helpful for compression. Moreover,
they are rarely used in NLP applications since the relevant signal is captured by smaller fragments.
ADMM is an optimization framework that operates by splitting a problem into two subproblems that
are individually easier to solve. It alternates solving the subproblems until they both agree on the
solution, at which point the full optimization problem has been solved. More formally, the optimum
of a convex function $h(w) = f(w) + g(w)$ can be found by minimizing $f(w) + g(z)$ subject to the
constraint that $w = z$. ADMM accomplishes this by operating on the augmented Lagrangian
$$L_\rho(w, z, y) = f(w) + g(z) + y^T(w - z) + \frac{\rho}{2}\,\|w - z\|_2^2\,. \qquad (3)$$
It minimizes $L_\rho$ with respect to w and z while maximizing with respect to the dual variable $y \in \mathbb{R}^m$ in
order to enforce the condition $w = z$. This minimization is accomplished by, at each step, solving
for w, then z, then updating y according to [2]. These steps are repeated until convergence.
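The resulting iteration is the standard ADMM loop. A schematic version (ours, with prox_f and project_z standing in for the w- and z-subproblem solvers derived below):

    import numpy as np

    def admm(prox_f, project_z, m, rho=1.0, n_iter=100):
        # Schematic ADMM for min f(w) + g(z) subject to w = z.
        w = np.zeros(m); z = np.zeros(m); y = np.zeros(m)
        for _ in range(n_iter):
            w = prox_f(z, y, rho)      # argmin_w f(w) + y^T(w - z) + (rho/2)||w - z||^2
            z = project_z(w, y, rho)   # argmin_z g(z) + y^T(w - z) + (rho/2)||w - z||^2
            y = y + rho * (w - z)      # dual ascent on the consensus constraint
        return w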
Dropping the $D^{(i)}$ superscripts for legibility, we can exploit problem structure by splitting (2) into
$$f(w) = w^T \tilde d + \sum_{s\in S} c(s)\,\|D_{J(s)J(s)} w_{J(s)}\|_\infty + I_+(w)\,, \qquad g(z) = I_+(Xz - 1)\,, \qquad (4)$$
where $I_+(\cdot)$ is 0 if its argument is non-negative and $\infty$ otherwise. We eliminated the $w \le 1$ constraint
because it is unnecessary: any optimal solution will automatically satisfy it.
Minimizing w The dual of this problem is a quadratic knapsack problem solvable in linear expected time [4]; we provide a similar algorithm that solves the primal formulation. We solve for
each $w_{J(s)}$ separately since the optimization is separable in each block of variables. It can be shown
[21] that $w_{J(s)} = 0$ if $\|D^{-1}_{J(s)J(s)} q_{J(s)}\|_1 \le c(s)$, where $q_{J(s)} = \max\{\rho z_{J(s)} - \tilde d_{J(s)} - y_{J(s)},\, 0\}$ and the
max operation is applied elementwise. Otherwise, $w_{J(s)}$ is non-zero and the $L_\infty$ norm only affects
the maximal coordinates of $D_{J(s)J(s)} w_{J(s)}$. For simplicity of exposition, we assume that the coefficients of $w_{J(s)}$ are sorted in decreasing order according to $D_{J(s)J(s)} q_{J(s)}$, i.e., $[D_{J(s)J(s)} q_{J(s)}]_j \ge
[D_{J(s)J(s)} q_{J(s)}]_{j+1}$. This is always possible by permuting coordinates. We show in [21] that, if
$D_{J(s)J(s)} w_{J(s)}$ has r maximal coordinates, then
$$w_{J(s)_j} = D^{-1}_{J(s)_j J(s)_j}\,\min\left\{ D_{J(s)_j J(s)_j} q_{J(s)_j}\,,\ \frac{\sum_{v=1}^r D^{-1}_{J(s)_v J(s)_v} q_{J(s)_v} - c(s)}{\sum_{v=1}^r D^{-2}_{J(s)_v J(s)_v}} \right\}\,. \qquad (5)$$
We can find r by searching for the smallest value of r for which exactly r coefficients in $D_{J(s)J(s)} w_{J(s)}$
are maximal when determined by the formula above. As discussed in [21], an algorithm similar to
the linear-time median-finding algorithm can be used to determine $w_{J(s)}$ in linear expected time.
Minimizing z Solving for z is tantamount to projecting a weighted combination of w and y onto
the polyhedron given by $Xz \ge 1$ and is best solved by taking the dual. It can be shown [21] that the
dual optimization problem is
$$\min_{\eta}\ \frac{1}{2}\,\eta^T H \eta - \eta^T(\rho 1 - X(y + \rho w)) \quad \text{subject to} \quad \eta \ge 0\,, \qquad (6)$$
where $\eta \in \mathbb{R}^n_+$ is a dual variable enforcing $Xz \ge 1$ and $H = XX^T$. Strong duality obtains and z can
be recovered via $z = \rho^{-1}(y + \rho w + X^T\eta)$.
The matrix H has special structure when S is a set of k-grams no longer than k words. In this
case, [21] shows that H is a $(k - 1)$-banded positive definite matrix, so we can find its Cholesky
decomposition in $O(k^2 n)$. We then use an active-set Newton method [12] to solve (6) quickly in
approximately 5 Cholesky decompositions. A second important property of H is that, if N documents $n_1, \ldots, n_N$ words long are compressed jointly and no k-gram spans two documents, then H is
block-diagonal with block i an $n_i \times n_i$ $(k - 1)$-banded matrix. This allows us to solve (6) separately
for each document. Since the majority of the time is spent solving for z, this property allows us to
parallelize the algorithm and speed it up considerably.
5 Experiments
20 Newsgroups Dataset The majority of our experiments are performed on the 20 Newsgroups
dataset [15, 22], a collection of about 19K messages approximately evenly split among 20 different
newsgroups. Since each newsgroup discusses a different topic, some more closely related than
others, we investigate our compressed features' ability to elucidate class structure in supervised and
unsupervised learning scenarios. We use the "by-date" 60%/40% training/testing split described in
[22] for all classification tasks. This split makes our results comparable to the existing literature
and makes the task more difficult by removing correlations from messages that are responses to one
another.
Feature Extraction and Training We compute a bag-of-k-grams representation from a compressed document by counting the number of pointers that use each substring in the compressed
version of the document. This method retrieves the canonical bag-of-k-grams representation when
all pointers are used, i.e., w = 1. Our compression criterion therefore leads to a less redundant
representation. Note that we extract features for a document corpus by compressing all of its documents jointly and then splitting into testing and training sets. Since this process involves no label
information, it ensures that our estimate of testing error is unbiased.
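In code, this feature map is a simple count over the selected pointers. A tiny sketch reusing the toy pointer list P and binary solution w from the Section 3 snippet (an illustration, not the experiment pipeline):

    from collections import Counter

    def bag_of_kgrams(w, P):
        # Count, for each dictionary string s, how many selected pointers (w_j = 1) use s.
        return Counter(s for j, (s, _) in enumerate(P) if w[j] == 1)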
All experiments were limited to using 5-grams as features, i.e., k = 5 for our compression algorithm.
Each substring's dictionary cost was its word length, and the pointer cost was uniformly set to 0 ≤
λ ≤ 5. We found that an overly large λ hurts accuracy more than an overly small value, since the
former produces long, infrequent substrings, while the latter tends to a unigram representation. It is
also worthwhile to note that the storage cost (i.e., the value of the objective function) of the binary
solution was never more than 1.006 times the storage cost of the relaxed solution, indicating that we
consistently found a good local optimum.
Finally, all classification tasks use an Elastic-Net-regularized logistic regression classifier implemented by glmnet [8]. Since this regularizer is a mix of L1 and L2 penalties, it is useful for feature
selection but can also be used as a simple L2 ridge penalty. Before training, we normalize each document by its L1 norm and then normalize features by their standard deviation. We use this scheme
so as to prevent overly long documents from dominating the feature normalization.
LZ77 Comparison Our first experiment demonstrates LZ77's sensitivity to document
ordering on a simple binary classification task of predicting whether a document is from the
alt.atheism (A) or comp.graphics (G) newsgroup. Features were computed by concatenating documents in different orders: (1) by
class, i.e., all documents in A before those in
G, or G before A; (2) randomly; (3) by alternating the class every other document. Fig. 2
shows the testing error compared to features
computed from our criterion. Error bars were
estimated by bootstrapping the testing set 100
times, and all regularization parameters were
chosen to minimize testing error while λ was
fixed at 0.03. As predicted in Section 2, document ordering has a marked impact on performance, with the by-class and random orders
performing significantly worse than the alternating ordering. Moreover, order invariance
and the ability to tune the pointer cost let our
criterion select a better set of 5-grams.

Figure 2: Misclassification error and standard
error bars when classifying alt.atheism (A) vs.
comp.graphics (G) from 20 Newsgroups. The
four leftmost results are on features from running
LZ77 on documents ordered by class (AG, GA),
randomly (Rand), or by alternating classes (Alt);
the rightmost is on our compressed features.
PCA Next, we investigate our features in a typical exploratory analysis scenario: a researcher
looking for interesting structure by plotting all pairs of the top 10 principal components of the
data. In particular, we verify PCA's ability to recover binary class structure for the A and G newsgroups, as well as multiclass structure for the A, comp.sys.ibm.pc.hardware (PC), rec.motorcycles
(M), sci.space (S), and talk.politics.mideast (PM) newsgroups. Fig. 3 plots the pair of principal
components that best exemplifies class structure using (1) compressed features and (2) all 5-grams.
For the sake of fairness, the components were picked by training a logistic regression on every pair
of the top 10 principal components and selecting the pair with the lowest training error. In both the
binary and multiclass scenarios, PCA is inundated by millions of features when using all 5-grams
and cannot display good class structure. In contrast, compression reduces the feature set to tens of
thousands (by two orders of magnitude) and clearly shows class structure. The star pattern of the
five classes stands out even when class labels are hidden.
Figure 3: PCA plots for 20 Newsgroups. Left: alt.atheism (blue), comp.graphics (red). Right:
alt.atheism (blue), comp.sys.ibm.pc.hardware (green), rec.motorcycles (red), sci.space (cyan),
talk.politics.mideast (magenta). Top: compressed features (our method). Bottom: all 5-grams.
Table 1: Classification accuracy on the 20 Newsgroups and IMDb datasets

Method                       20 Newsgroups   IMDb
Discriminative RBM [16]      76.2            -
Bag-of-Words SVM [14, 20]    80.8            88.2
Naïve Bayes [17]             81.8            -
Word Vectors [20]            -               88.9
All 5-grams                  82.8            90.6
Compressed (our method)      83.0            90.4
Classification Tasks Table 1 compares the performance of compressed features with all 5-grams
on two tasks: (1) categorizing posts from the 20 Newsgroups corpus into one of 20 classes; (2) categorizing movie reviews collected from IMDb [20] into one of two classes (there are 25,000 training
and 25,000 testing examples evenly split between the classes). For completeness, we include comparisons with previous work for 20 Newsgroups [16, 14, 17] and IMDb [20]. All regularization
parameters, including λ, were chosen through 10-fold cross validation on the training set. We also
did not L1-normalize documents in the binary task because it was found to be counterproductive on
the training set.
Our classification performance is state of the art in both tasks, with the compressed and all-5-gram
features tied in performance. Since both datasets feature copious amounts of labeled data, we expect
the 5-gram features to do well because of the power of the Elastic-Net regularizer. What is remarkable is that the compression retains useful features without using any label information. There are
tens of millions of 5-grams, but compression reduces them to hundreds of thousands (by two orders
of magnitude). This has a particularly noticeable impact on training time for the 20 Newsgroups
dataset. Cross-validation takes 1 hour with compressed features and 8–16 hours for all 5-grams on
our reference computer, depending on the sparsity of the resulting classifier.
Training-Set Size Our final experiment explores the impact of training-set size on binary-classification accuracy for the A vs. G and rec.sport.baseball (B) vs. rec.sport.hockey (H) newsgroups.
Fig. 4 plots testing error as the amount of training data varies, comparing compressed features to full
5-grams; we explore the latter with and without feature selection enabled (i.e., Elastic Net vs. L2
regularizer). We resampled the training set 100 times for each training-set size and report the average
accuracy. All regularization parameters were chosen to minimize the testing error (so as to eliminate effects from imperfect tuning) and λ = 0.03 in both tasks. For the A–G task, the compressed
features require substantially less data than the full 5-grams to come close to their best testing error.
The B–H task is harder and all three classifiers benefit from more training data, although the gap
between compressed features and all 5-grams is widest when less than half of the training data is
available. In all cases, the compressed features outperform the full 5-grams, indicating that the
latter may benefit from even more training data. In future work it will be interesting to investigate
the efficacy of compressed features on more intelligent sampling schemes such as active learning.

Figure 4: Classification accuracy as the training-set size varies for two classification tasks
from 20 Newsgroups: (a) alt.atheism (A) vs. comp.graphics (G); (b) rec.sport.baseball (B) vs.
rec.sport.hockey (H). To demonstrate the effects of feature selection, L2 indicates L2-regularization
while EN indicates Elastic-Net regularization.
6 Discussion
We develop a feature selection method for text based on lossless data compression. It is unsupervised
and can thus be run as a task-independent, one-off preprocessing step on a corpus. Our method
achieves state-of-the-art classification accuracy on two benchmark datasets despite selecting features
without any knowledge of the class labels. In experiments comparing it to a full 5-gram model, our
method reduces the feature-set size by two orders of magnitude and requires only a fraction of the
time to train a classifier. It selects a compact feature set that can require significantly less training
data and reveals unsupervised problem structure (e.g., when using PCA).
Our compression scheme is more robust and less arbitrary compared to a setup which uses off-the-shelf compression algorithms to extract features from a document corpus. At the same time, our method has increased flexibility since the target k-gram length is a tunable parameter. Importantly, the algorithm we present is based on iterative reweighting and ADMM and is fast enough (linear in the input size when k is fixed, and highly parallelizable) to allow for computing a regularization path of features by varying the pointer cost. Thus, we may adapt the compression to the data at hand and select features that better elucidate its structure.
Finally, even though we focus on text data in this paper, our method is applicable to any sequential
data where the sequence elements are drawn from a finite set (such as the universe of words in the
case of text data). In future work we plan to compress click stream data from users browsing the
Web. We also plan to experiment with approximate text representations obtained by making our
criterion lossy.
Acknowledgments
We would like to thank Andrej Krevl, Jure Leskovec, and Julian McAuley for their thoughtful discussions and help with our paper.
References
[1] D. Benedetto, E. Caglioti, and V. Loreto. Language trees and zipping. PRL, 88(4):048702, 2002.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[3] A. Bratko, B. Filipic, G. V. Cormack, T. R. Lynam, and B. Zupan. Spam filtering using statistical data compression models. JMLR, 7:2673-2698, 2006.
[4] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163-166, 1984.
[5] E. Candes, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted l1 minimization. J Fourier Analysis and Applications, 14(5-6):877-905, 2008.
[6] R. Cilibrasi and P. M. Vitanyi. Clustering by compression. TIT, 51(4):1523-1545, 2005.
[7] E. Frank, C. Chui, and I. Witten. Text categorization using compression models. Technical Report 00/02, University of Waikato, Department of Computer Science, 2000.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. J Stat Softw, 33(1):1-22, 2010.
[9] E. Gabrilovich and S. Markovitch. Text categorization with many redundant features: Using aggressive feature selection to make SVMs competitive with C4.5. In ICML, 2004.
[10] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 3:1157-1182, 2003.
[11] E. Keogh, S. Lonardi, and C. A. Ratanamahatana. Towards parameter-free data mining. In KDD, 2004.
[12] D. Kim, S. Sra, and I. S. Dhillon. Tackling box-constrained optimization via a new projected quasi-newton approach. SIAM Journal on Scientific Computing, 32(6):3548-3563, 2010.
[13] V. Kuleshov. Fast algorithms for sparse principal component analysis based on Rayleigh quotient iteration. In ICML, 2013.
[14] M. Lan, C. Tan, and H. Low. Proposing a new term weighting scheme for text categorization. In AAAI, 2006.
[15] K. Lang. Newsweeder: Learning to filter netnews. In ICML, 1995.
[16] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In ICML, 2008.
[17] B. Li and C. Vogel. Improving multiclass text classification with error-correcting output coding and sub-class partitions. In Can Conf Adv Art Int, 2010.
[18] H. Liu and L. Yu. Toward integrating feature selection algorithms for classification and clustering. TKDE, 17(4):491-502, 2005.
[19] T. Liu, S. Liu, Z. Chen, and W. Ma. An evaluation on feature selection for text clustering. In ICML, 2003.
[20] A. Maas, R. Daly, P. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In ACL, 2011.
[21] H. S. Paskov, R. West, J. C. Mitchell, and T. J. Hastie. Supplementary material for Compressive Feature Learning, 2013.
[22] J. Rennie. 20 Newsgroups dataset, 2008. http://qwone.com/~jason/20Newsgroups (accessed May 31, 2013).
[23] J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465-471, 1978.
[24] D. Sculley and C. E. Brodley. Compression and machine learning: A new perspective on feature space vectors. In DCC, 2006.
[25] R. Tibshirani. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B, 58(1):267-288, 1996.
[26] Y. Yang and J. Pedersen. A comparative study on feature selection in text categorization. In ICML, 1997.
[27] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. In NIPS, 2004.
[28] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. TIT, 23(3):337-343, 1977.
[29] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. JCGS, 15(2):265-286, 2006.
Pass-Efficient Unsupervised Feature Selection
Crystal Maung
Department of Computer Science
The University of Texas at Dallas
[email protected]

Haim Schweitzer
Department of Computer Science
The University of Texas at Dallas
[email protected]
Abstract
The goal of unsupervised feature selection is to identify a small number of important features that can represent the data. We propose a new algorithm, a modification of the classical pivoted QR algorithm of Businger and Golub, that requires a
small number of passes over the data. The improvements are based on two ideas:
keeping track of multiple features in each pass, and skipping calculations that can
be shown not to affect the final selection. Our algorithm selects the exact same
features as the classical pivoted QR algorithm, and has the same favorable numerical stability. We describe experiments on real-world datasets which sometimes
show improvements of several orders of magnitude over the classical algorithm.
These results appear to be competitive with recently proposed randomized algorithms in terms of pass efficiency and run time. On the other hand, the randomized
algorithms may produce more accurate features, at the cost of small probability of
failure.
1 Introduction
Work on unsupervised feature selection has received considerable attention. See, e.g., [1, 2, 3, 4,
5, 6, 7, 8]. In numerical linear algebra unsupervised feature selection is known as the column
subset selection problem, where one attempts to identify a small subset of matrix columns that can
approximate the entire column space of the matrix. See, e.g., [9, Chapter 12]. The distinction
between supervised and unsupervised feature selection is as follows. In the supervised case one
is given labeled objects as training data and features are selected to help predict that label; in the
unsupervised case nothing is known about the labels.
We describe an improvement to the classical Businger and Golub pivoted QR algorithm [9, 10]. We
refer to the original algorithm as the QRP, and to our improved algorithm as the IQRP. The QRP
selects features one by one, using k passes in order to select k features. In each pass the selected
feature is the one that is the hardest to approximate by the previously selected features. We achieve
improvements to the algorithm run time and pass efficiency without affecting the selection and the
excellent numerical stability of the original algorithm. Our algorithm is deterministic, and runs in a
small number of passes over the data. It is based on the following two ideas:
1. In each pass we identify multiple features that are hard to approximate with the previously
selected features. A second selection step among these features uses an upper bound on
unselected features that enables identifying multiple features that are guaranteed to have
been selected by the QRP. See Section 4 for details.
2. Since the error of approximating a feature can only decrease when additional features are
added to the selection, there is no need to evaluate candidates with error that is already "too small". This allows a significant reduction in the number of candidate features that need to
be considered in each pass. See Section 4 for details.
2 Algorithms for unsupervised feature selection
The algorithms that we consider take as input large matrices of numeric values. We denote by m
the number of rows, by n the number of columns (features), and by k the number of features to be
selected. Criteria for evaluating algorithms include their run time and memory requirements, the
number of passes over the data, and the algorithm accuracy. The accuracy is a measure of the error
of approximating the entire data matrix as a linear combination of the selection. We review some
classical and recent algorithms for unsupervised feature selection.
2.1 Related work in numerical linear algebra
Businger and Golub QRP. The QRP was established by Businger and Golub [9, 10]. We discuss it in detail in Section 3. It requires k passes for selecting k features, and its run time is 4kmn - 2k^2(m + n) + 4k^3/3. A recent study [11] compares experimentally the accuracy of the QRP as a feature selection
algorithm to some recently proposed state-of-the-art algorithms. Even though the accuracy of the
QRP is somewhat below the other algorithms, the results are quite similar. (The only exception was
the performance on the Kahan matrix, where the QRP was much less accurate.)
Gu and Eisenstat algorithm. This algorithm [1] was considered the most accurate prior to the work on randomized
algorithms that had started with [12]. It computes an initial selection (typically by using the QRP),
and then repeatedly swaps selected columns with unselected column. The swapping is done so that
the product of singular values of the matrix formed by the selected columns is increased with each
swapping. The algorithm requires random access memory, and it is not clear how to implement it
by a series of passes over the data. Its run time is O(m^2 n).
2.2 Randomized algorithms
Randomized algorithms come with a small probability of failure, but otherwise appear to be more
accurate than the classical deterministic algorithms. Frieze et al. [12, 13] have proposed a randomized
algorithm that requires only two passes over the data. This assumes that the norms of all matrix
columns are known in advance, and guarantees only an additive approximation error. We discuss
the run time and the accuracy of several generalizations that followed their studies.
Volume sampling. Deshpande et al. [14] have studied a randomized algorithm that samples k-tuples of columns with probability proportional to their "volume". The volume is the square of the product of the singular values of the submatrix formed by these columns. They show that this sampling scheme gives rise to a randomized algorithm that computes the best possible solution in the Frobenius norm. They describe an efficient O(kmn) randomized algorithm that can be implemented in k passes and approximates this sampling scheme. These results were improved (in terms of accuracy) in [15], by computing the exact volume sampling. The resulting algorithm is slower but much more accurate. Further improvements to the speed of volume sampling in [6] have reduced the run time complexity to O(km^2 n). As shown in [15, 6], this optimal (in terms of accuracy) algorithm can also be derandomized, with a deterministic run time of O(km^3 n).
Leverage sampling. The idea behind leverage sampling is to randomly select features with probability proportional to their "leverage". Leverage values are norms of the rows of the n x k right eigenvector matrix in the truncated SVD expansion of the data matrix. See [16, 2]. In particular, the "two stage" algorithm described in [2] requires only 2 passes if the leverage values are known. Its run time is dominated by the calculation of the leverage values. To the best of our knowledge the currently best algorithms for estimating leverage values are randomized [17, 18]. One run takes 2 passes and O(mn log n + m^3) time. This is dominated by the mn term, and [18] show that it can be further reduced to the number of nonzero values. We note that these algorithms do not compute reliable leverage in 2 passes, since they may fail at a relatively high (e.g., 1/3) probability. As stated in [18], "the success probability can be amplified by independent repetition and taking the coordinate-wise median". Therefore, accurate estimates of leverage can be computed in a constant number of passes. But the constant would be larger than 2.
Input: The features (matrix columns) x1, . . . , xn, and an integer k ≤ n.
Output: An ordered list S of k indices.
1. In the initial pass compute:
   1.1. For i = 1, . . . , n set x̃i = xi, vi = |x̃i|^2. (x̃i is the error vector of
        approximating xi by a linear combination of the columns in S.)
   At the end of the pass set z1 = arg max_i vi, and initialize S = (z1).
2. For each pass j = 2, . . . , k:
   2.1. For i = 1, . . . , n set vi to the square error of
        approximating xi by a linear combination of the columns in S.
   At the end of pass j set zj = arg max_i vi, and add zj to S.

Figure 1: The main steps of the QRP algorithm.
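As a concrete rendering of Figure 1, the following dense in-core sketch (ours, not the paper's code; the point of the paper is precisely to avoid holding all of A and its residuals in memory) selects k columns with Modified Gram-Schmidt updates:

import numpy as np

def qrp_select(A, k):
    # Figure 1 with Modified Gram-Schmidt on a working copy of A.
    # Returns the ordered list S of k selected column indices.
    R = np.array(A, dtype=float)       # residual vectors x~_i, updated in place
    v = np.sum(R * R, axis=0)          # v_i = |x~_i|^2
    S = []
    for _ in range(k):                 # one pass per selected feature
        z = int(np.argmax(v))
        S.append(z)
        q = R[:, z] / np.linalg.norm(R[:, z])
        w = q @ R                      # w_i = q^T x~_i
        R -= np.outer(q, w)            # x~_i <- x~_i - w_i q
        v -= w * w                     # v_i <- v_i - w_i^2
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
print(qrp_select(A, k=5))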
2.3 Randomized ID
In a recent survey [19] Halko et al. describe how to compute a QR factorization using their randomized Interpolative Decomposition. Their approach produces an accurate Q as a basis of the data matrix column space. They propose an efficient "row extraction" method for computing R, that works when k, the desired rank, is similar to the rank of the data matrix. Otherwise the row extraction introduces unacceptable inaccuracies, which led Halko et al. to recommend using an alternative O(kmn) technique in such cases.
2.4 Our result, the complexity of the IQRP
The savings that the IQRP achieves depend on the data. The algorithm takes as input an integer value
l, the length of a temporary buffer. As explained in Section 4 our implementation requires temporary
storage of l + 1, which takes (l + 1)m floats. The following values depend on the data: the number
of passes p, the number of IO-passes q (explained below), and a unit cost of orthogonalization c (see
Section 4.3).
In terms of l and c the run time is 2mn + 4mnc + 4mlk. Our experiments show that for typical
datasets the value of c is below k. For l ≈ k our experiments show that the number of passes is
typically much smaller than k. The number of passes is even smaller if one considers IO-passes. To
explain what we mean by IO-passes consider as an example a situation where the algorithm runs
three passes over the data. In the first pass all n features are being accessed. In the second, only two
features are being accessed. In the third, only one feature is being accessed. In this case we take
the number of IO-passes to be q = 1 + 3/n. We believe that q is a relevant measure of the algorithm pass
complexity when skipping is cheap, so that the cost of a pass over the data is the amount of data that
needs to be read.
3 The Businger Golub algorithm (QRP)
In this section we describe the QRP [9, 10], which forms the basis of the IQRP. The main steps are described in Figure 1. There are two standard implementations for Step 2.1 in Figure 1. The first is by means of the "Modified Gram-Schmidt" (e.g., [9]), and the second is by Householder
orthogonalization (e.g., [9]). Both methods require approximately the same number of flops, but
error analysis (see [9]) shows that the Householder approach is significantly more stable.
3.1 Memory-efficient implementations
The implementations shown in Figure 2 update the memory where the matrix A is stored. Specifically, A is overwritten by the R component of the QR factorization. Since we are not interested in
R, overwriting A may not be acceptable. The procedure shown in Figure 3 does not overwrite A,
but it is more costly. The flops count is dominated by Steps 1 and 2, which cost at most 4(j - 1)mn at pass j. Summing up for j = 1, . . . , k this gives a total flops count of approximately 2k^2 mn flops.
Modified Gram-Schmidt
Compute zj, qj, Qj
for i = 1, . . . , n
  1. wi = q_{j-1}^T x̃i.
  2. x̃i <- x̃i - wi q_{j-1}.
  3. vi <- vi - wi^2.
At the end of the pass:
  4. zj = arg max_i vi.
  5. qj = x̃_{zj} / |x̃_{zj}|.
  6. Qj = (Q_{j-1}, qj).

Householder orthogonalization
Compute zj, hj, Hj
for i = 1, . . . , n
  1. x̃i <- h_{j-1} x̃i.
  2. wi = x̃i(j) (the j-th coordinate of x̃i).
  3. vi <- vi - wi^2.
At the end of the pass:
  4. zj = arg max_i vi.
  5. Create the Householder matrix hj from x̃_{zj}.
  6. Hj = H_{j-1} hj.

Figure 2: Standard implementations of Step 2.1 of the QRP.
Modified Gram-Schmidt
Compute zj, qj, Qj
for i = 1, . . . , n
  1. wi = Q_{j-1}^T xi.
  2. vi = |xi|^2 - |wi|^2.
At the end of the pass:
  3. zj = arg max_i vi.
  4. q̃j = x_{zj} - Q_{j-1} w_{zj},  qj = q̃j / |q̃j|.
  5. Qj = (Q_{j-1}, qj).

Householder orthogonalization
Compute zj, hj, Hj
for i = 1, . . . , n
  1. yi = H_{j-1} xi.
  2. vi = Σ_{t=j+1}^{m} yi(t)^2.
At the end of the pass:
  3. zj = arg max_i vi.
  4. Create hj from y_{zj}.
  5. Hj = H_{j-1} hj.

Figure 3: Memory-efficient implementations of Step 2.1 of the QRP.
4 The IQRP algorithm
In this section we describe our main result: the improved QRP. The algorithm maintains three ordered lists of columns: The list F is the input list containing all columns. The list S contains
columns that have already been selected. The list L is of size l, where l is a user defined parameter.
For each column xi in F the algorithm maintains an integer value ri and a real value vi . These
values can be kept in core or in secondary memory. They are defined as follows:

   ri ≤ |S|,   vi = vi(ri) = ‖xi − Q_{ri} Q_{ri}^T xi‖²,                        (1)

where Q_{ri} = (q1, . . . , q_{ri}) is an orthonormal basis to the first ri columns in S. Thus, vi(ri) is the (squared) error of approximating xi with the first ri columns in S. In each pass the algorithm identifies the l candidate columns xi corresponding to the l largest values of vi(|S|). That is, the vi values are computed as the error of predicting each candidate by all columns currently in S. The identified l columns with the largest vi(|S|) are stored in the list L. In addition, the value of the (l+1)-th largest vi(|S|) is kept as the constant BF. Thus, after a pass is terminated the following condition holds:

   vα(rα) ≤ BF for all xα ∈ F \ L.                                              (2)
The list L and the value BF can be calculated in one pass using a binary heap data structure, with
the cost of at most n log(l + 1) comparisons. See [20, Chapter 9]. The main steps of the algorithm
are described in Figure 4.
Details of Steps 2.0, 2.1 of the IQRP. The threshold value T is defined by:

   T = −∞                  if the heap is not full,
       top of the heap     if the heap is full.                                 (3)
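The threshold in (3) falls out of a standard bounded min-heap. A small sketch (ours) with Python's heapq keeps the l+1 largest values seen so far, so that the heap top is exactly T once the heap is full:

import heapq

def push_candidate(heap, l, value, index):
    # Keep only the l+1 largest (value, index) pairs seen so far; once the
    # heap is full, heap[0][0] is the threshold T of equation (3).
    if len(heap) < l + 1:
        heapq.heappush(heap, (value, index))
    elif value > heap[0][0]:
        heapq.heapreplace(heap, (value, index))

heap, l = [], 3
for i, v in enumerate([5.0, 1.0, 9.0, 2.0, 7.0, 8.0]):
    push_candidate(heap, l, v, i)
T = heap[0][0] if len(heap) == l + 1 else float("-inf")
print(T)  # 5.0: the four largest values are 9, 8, 7 and 5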
Input: The matrix columns (features) x1, . . . , xn, and an integer k ≤ n.
Output: An ordered list S of k indices.
1. (The initial pass over F.)
   1.0. Create a min-heap of size l+1.
   In one pass go over xi, i = 1, . . . , n:
   1.1. Set vi(0) = |xi|^2, ri = 0.
        Fill the heap with the candidates corresponding to the l+1 largest vi(0).
   1.2. At the end of the pass:
        Set BF to the value at the top of the heap.
        Set L to the heap content excluding the top element.
        Add to S as many candidates from L as possible using BF.
2. Repeat until S has k candidates:
   2.0. Create a min-heap of size l+1.
        Let T be defined by (3).
        In one pass go over xi, i = 1, . . . , n:
   2.1. Skip xi if vi(ri) ≤ T. Otherwise update vi, ri, heap.
   2.2. At the end of the pass:
        Set BF = T.
        Set L to the heap content excluding the top element.
        Add to S as many candidates from L as possible using BF.

Figure 4: The main steps of the IQRP algorithm.
Thus, when the heap is full, T is the value of v associated with the (l+1)-th largest candidate encountered so far. The details of Step 2.1 are shown in Figure 5. Step A.2.2.1 can be computed using either Gram-Schmidt or Householder, as shown in Figures 2 and 3.
A.1. If vi(ri) ≤ T skip xi.
A.2. Otherwise check ri:
   A.2.1. If ri = |S| conditionally insert xi into the heap.
   A.2.2. If ri < |S|:
      A.2.2.1. Compute vi(|S|). Set ri = |S|.
      A.2.2.2. Conditionally insert xi into the heap.

Figure 5: Details of Step 2.1.
Details of Steps 1.2 and 2.2 of the IQRP. Here we are given the list L and the value of BF
satisfying (2). To move candidates from L to S run the QRP on L as long as the pivot value is above
BF . (The pivot value is the largest value of vi (|S|) in L.) The details are shown in Figure 6.
B.1. z = arg max_{i ∈ L} vi(|S|).
B.2. If vz(|S|) < BF, we are done exploiting L.
B.3. Otherwise:
   B.3.1. Move z from L to S.
   B.3.2. Update the remaining candidates in L using either Gram-Schmidt or
          the Householder procedure. For example, with Householder:
      B.3.2.1. Create the Householder matrix hj from xz.
      B.3.2.2. For all x in L replace x with hj x.

Figure 6: Details of Steps 1.2 and 2.2.
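Putting Figures 4-6 together, a simplified in-core sketch (ours; the actual algorithm streams columns from secondary storage and can use Householder updates for better stability) reads:

import heapq
import numpy as np

def iqrp_select(A, k, l):
    # Simplified rendering of Figures 4-6. Q holds an orthonormal basis of
    # the selected columns; v[i] caches the error v_i(r[i]) of approximating
    # column i by the first r[i] selected columns.
    m, n = A.shape
    v = np.sum(A * A, axis=0)                  # v_i(0) = |x_i|^2
    r = np.zeros(n, dtype=int)
    Q = np.zeros((m, 0))
    S = []
    while len(S) < k:
        heap = []                              # min-heap of l+1 best candidates
        for i in range(n):
            T = heap[0][0] if len(heap) == l + 1 else -np.inf
            if v[i] <= T:                      # Step A.1: skip x_i
                continue
            if r[i] < len(S):                  # Step A.2.2.1: refresh v_i(|S|)
                w = Q.T @ A[:, i]
                v[i] = A[:, i] @ A[:, i] - w @ w
                r[i] = len(S)
            if len(heap) < l + 1:              # conditional heap insertion
                heapq.heappush(heap, (v[i], i))
            elif v[i] > heap[0][0]:
                heapq.heapreplace(heap, (v[i], i))
        heap.sort()
        BF, L = heap[0][0], [i for _, i in heap[1:]]
        while L and len(S) < k:                # Steps B.1-B.3: drain L
            for i in L:                        # keep candidates up to date
                if r[i] < len(S):
                    w = Q.T @ A[:, i]
                    v[i] = A[:, i] @ A[:, i] - w @ w
                    r[i] = len(S)
            z = max(L, key=lambda i: v[i])
            if v[z] < BF:                      # Step B.2: done exploiting L
                break
            S.append(z)
            L.remove(z)
            q = A[:, z] - Q @ (Q.T @ A[:, z])  # Gram-Schmidt residual
            Q = np.hstack([Q, (q / np.linalg.norm(q))[:, None]])
    return S

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 300))
print(iqrp_select(A, k=8, l=8))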
4.1 Correctness
In this section we show that the IQRP computes the same selection as the QRP. The proof is by induction on j, the number of columns in S. For j = 0 the QRP selects xj with vj = |xj|² = max_i |xi|². The IQRP selects v′j as the largest among the l largest values in F. Therefore v′j = max_{xi ∈ L} |xi|² = max_{xi ∈ F} |xi|² = vj. Now assume that for j = |S| the QRP and the IQRP select the same columns in S (this is the inductive assumption). Let vj(|S|) be the value of the (j+1)-th selection by the QRP, and let v′j(|S|) be the value of the (j+1)-th selection by the IQRP. We need to show that v′j(|S|) = vj(|S|). The QRP selection of j satisfies: vj(|S|) = max_{xi ∈ F} vi(|S|). Observe that if xi ∈ L then ri = |S|. (Initially L is created from the heap elements that have ri = |S|. Once S is increased in Step B.3.1 the columns in L are updated according to B.3.2 so that they all satisfy ri = |S|.) The IQRP selection satisfies:

   v′j(|S|) = max_{xi ∈ L} vi(|S|)   and   v′j(|S|) ≥ BF.                       (4)

Additionally, for all xα ∈ F \ L:

   BF ≥ vα(rα) ≥ vα(|S|).                                                       (5)

This follows from (2), the observation that vα(r) is monotonically decreasing in r, and rα ≤ |S|. Therefore, combining (4) and (5) we get

   v′j(|S|) = max_{xi ∈ F} vi(|S|) = vj(|S|).

This completes the proof by induction.
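The equivalence is easy to sanity-check numerically; assuming the qrp_select and iqrp_select sketches given earlier, the two selections can be compared on random data:

import numpy as np
# Assumes the qrp_select and iqrp_select sketches from above.
rng = np.random.default_rng(2)
ok = 0
for _ in range(20):
    A = rng.standard_normal((25, 80))
    ok += qrp_select(A, k=10) == iqrp_select(A, k=10, l=4)
print(f"identical selections in {ok} of 20 random trials")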
4.2 Termination
To see that the algorithm terminates it is enough to observe that at least one column is selected in
each pass. The condition at Step B.2 in Figure 6 cannot hold the first time it is tested on a new L. The value of BF is the (l+1)-th largest vi(|S|), while the maximum at B.1 is among the l largest vi(|S|).
4.3 Complexity
The formulas in this section describe the complexity of the IQRP in terms of the following:

   n : the number of features (matrix columns)
   m : the number of objects (matrix rows)
   k : the number of selected features
   l : user provided parameter, 1 ≤ l ≤ n
   p : the number of passes
   q : the number of IO-passes
   c : a unit cost of orthogonalizing F
The value of c depends on the implementation of Step A.2.2.1 in Figure 5. We write c_memory for the value of c in the memory-efficient implementation, and c_flops for the faster implementation (in terms of flops). We use the following notation. At pass j the number of selected columns is kj, and the number of columns that were not skipped in Step 2.1 of the IQRP (same as Step A.1) is nj.
The number of flops in the memory-efficient implementation can be shown to be

   flops_memory = 2mn + 4mnc + 4mlk,   where   c = Σ_{j=2}^{p} (nj / n) Σ_{j'=1}^{j-1} k_{j'}.        (6)
Observe that c ≤ k²/2, so that for l < n the worst case behavior is the same as the memory-optimized QRP algorithm, which is O(k²mn). We show in Section 5 that the typical run time is much faster. In particular, the dependency on k appears to be linear and not quadratic.
For the faster implementation that overwrites the input it can be shown that:

   flops_time = 2mn + 4m Σ_{i=1}^{n} r̃i,   where r̃i is the value of ri at termination.              (7)

Since r̃i ≤ k − 1 it follows that flops_time ≤ 4kmn. Thus, the worst case behavior is the same as the flops-efficient QRP algorithm.
The memory-efficient implementation requires km in-core floats, plus additional memory for the heap, which can be reused for the list L. Additional memory to store and manipulate vi, ri
for i = 1, . . . , n is roughly 2n floats. Observe that these memory locations are being accessed
consecutively, and can be efficiently stored and manipulated out-of-core. The data itself, the matrix
A, is stored out-of-core. When the method of Figure 3 is used in A.2.2.1, these matrix values are
read-only.
IO-passes. We wish to distinguish between a pass where the entire data is accessed and a pass where most of the data is skipped. This suggests the following definition for the number of IO-passes:

   q = Σ_{j=1}^{p} nj/n = 1 + Σ_{j=2}^{p} nj/n.
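For the three-pass example of Section 2.4 this definition gives, in a one-line computation (ours):

# IO-passes for the Section 2.4 example: pass 1 reads all n columns,
# pass 2 reads two of them, pass 3 reads one.
n = 1000
nj = [n, 2, 1]                 # columns actually read in each pass
p = len(nj)                    # number of passes: 3
q = sum(c / n for c in nj)     # IO-passes: 1 + 3/n = 1.003
print(p, q)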
Number of floating point comparisons. Testing for the skipping and manipulating the heap requires floating point comparisons. The number of comparisons is n(p − 1 + (q − 1) log2(l + 1)). This does not affect the asymptotic complexity since the number of flops is much larger.
5 Experimental results
We describe results on several commonly used datasets. "Day1", with m = 20,000 and n = 3,231,957, is part of the "URL reputation" collection at the UCI Repository. "thrombin", with m = 1,909 and n = 139,351, is the data used in KDD Cup 2001. "Amazon", with m = 1,500 and n = 10,000, is part of the "Amazon Commerce reviews set" and was obtained from the UCI Repository. "gisette", with m = 6,000 and n = 5,000, was used in the NIPS 2003 feature selection challenge.
Measurements. We vary k, and report the following: flops_memory and flops_time are the ratios between the number of flops used by the IQRP and kmn, for the memory-efficient orthogonalization and the time-efficient orthogonalization. #passes is the number of passes needed to select k features. #IO-passes is discussed in Sections 2.4 and 4.3. It is the number of times that the entire data is read. Thus, the ratio between the number of IO-passes and the number of passes is the fraction of the data that was not skipped.
Run time. The number of flops of the QRP is between 2kmn and 4kmn. We describe experiments
with the list size l taken as l = k. For Day1 the number of flops beats the QRP by a factor of more
than 100. For the other datasets the results are not as impressive. There are still significant savings
for small and moderate values of k (say up to k = 600), but for larger values the savings are smaller.
Most interesting is the observation that the memory-efficient implementation of Step 2.1 is not much
slower than the optimization for time. Recall that the memory-optimized QRP is k times slower than
the time-optimized QRP. In our experiments they differ by no more than a factor of 4.
Number of passes. We describe experiments with the list size l taken as l = k, and also with
l = 100 regardless of the value of k. The QRP takes k passes for selecting k features. For the
Day1 dataset we observed a reduction by a factor of between 50 to 250 in the number of passes. For
IO-passes, the reduction goes up to a factor of almost 1000. Similar improvements are observed for
the Amazon and the gisette datasets. For the thrombin it is slightly worse, typically a reduction by
a factor of about 70. The number of IO-passes is always significantly below the number of passes,
giving a reduction by factors up to 1000. For the recommended setting of l = k we observed the
following. In absolute terms the number of passes was below 10 for most of the data; the number of
IO-passes was below 2 for most of the data.
6 Concluding remarks
This paper describes a new algorithm for unsupervised feature selection. Based on the experiments
we recommend using the memory-efficient implementation and setting the parameter l = k. As
explained earlier the algorithm maintains 2 numbers for each column, and these can also be kept
in-core. This gives a 2(km + n) memory footprint.
Our experiments show that for typical datasets the number of passes is significantly smaller than
k. In situations where memory can be skipped the notion of IO-passes may be more accurate than
passes. IO-passes indicate the amount of data that was actually read and not skipped.
7
[Figure 7: four rows of plots, one per dataset (Day1: m = 20,000, n = 3,231,957; thrombin: m = 1,909, n = 139,351; Amazon: m = 1,500, n = 10,000; gisette: m = 6,000, n = 5,000). For each dataset, three panels plot against k (0 to 1,000): flops_memory/kmn and flops_time/kmn with l = k; the number of passes and IO-passes with l = k; and the number of passes and IO-passes with l = 100.]

Figure 7: Results of applying the IQRP to several datasets with varying k, and l = k.
The performance of the IQRP depends on the data. Therefore, the improvements that we observe can also be viewed as an indication that typical datasets are "easy". This appears to suggest that worst case analysis should not be considered as the only criterion for evaluating feature selection algorithms. Comparing the IQRP to the current state-of-the-art randomized algorithms that were reviewed in Section 2.2, we observe that the IQRP is competitive in terms of the number of passes and appears to outperform these algorithms in terms of the number of IO-passes. On the other hand, it may be less accurate.
References
[1] M. Gu and S. C. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM J. Computing, 17(4):848-869, 1996.
[2] C. Boutsidis, M. W. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Claire Mathieu, editor, Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2009, New York, NY, USA, January 4-6, 2009, pages 968-977. SIAM, 2009.
[3] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near-optimal column-based matrix reconstruction, February 2011. arXiv e-print (arXiv:1103.0995).
[4] A. Dasgupta, P. Drineas, B. Harb, V. Josifovski, and M. W. Mahoney. Feature selection methods for text classification. In Pavel Berkhin, Rich Caruana, and Xindong Wu, editors, KDD, pages 230-239. ACM, 2007.
[5] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Sparse features for PCA-like linear regression. In John Shawe-Taylor, Richard S. Zemel, Peter L. Bartlett, Fernando C. N. Pereira, and Kilian Q. Weinberger, editors, NIPS, pages 2285-2293, 2011.
[6] V. Guruswami and A. K. Sinop. Optimal column-based low-rank matrix reconstruction. In Yuval Rabani, editor, Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1207-1214. SIAM, 2012.
[7] Z. Li, Y. Yang, J. Liu, X. Zhou, and H. Lu. Unsupervised feature selection using nonnegative spectral analysis. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 22-26, 2012, Toronto, Ontario, Canada. AAAI Press, 2012.
[8] S. Zhang, H. S. Wong, Y. Shen, and D. Xie. A new unsupervised feature ranking method for gene expression data based on consensus affinity. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 9(4):1257-1263, July 2012.
[9] G. H. Golub and C. F. Van-Loan. Matrix computations. The Johns Hopkins University Press, third edition, 1996.
[10] P. Businger and G. H. Golub. Linear least squares solutions by Householder transformations. Numer. Math., 7:269-276, 1965.
[11] A. Civril and M. Magdon-Ismail. Column subset selection via sparse approximation of SVD. Theoretical Computer Science, 421:1-14, March 2012.
[12] A. M. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. In IEEE Symposium on Foundations of Computer Science, pages 370-378, 1998.
[13] A. M. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. Journal of the ACM, 51(6):1025-1041, 2004.
[14] A. Deshpande, L. Rademacher, S. Vempala, and G. Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing, 2(12):225-247, 2006.
[15] A. Deshpande and L. Rademacher. Efficient volume sampling for row/column subset selection. In FOCS, pages 329-338. IEEE Computer Society Press, 2010.
[16] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697-702, 2009.
[17] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13:3441-3472, 2012.
[18] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. arXiv e-print (arXiv:1207.6365v4), April 2013.
[19] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.
[20] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to algorithms. MIT Press and McGraw-Hill Book Company, third edition, 2009.
Better Approximation and Faster Algorithm Using
the Proximal Average
Yaoliang Yu
Department of Computing Science, University of Alberta, Edmonton AB T6G 2E8, Canada
[email protected]
Abstract
It is a common practice to approximate "complicated" functions with more friendly ones. In large-scale machine learning applications, nonsmooth losses/regularizers that entail great computational challenges are usually approximated by smooth functions. We re-examine this powerful methodology and point out a nonsmooth approximation which simply pretends the linearity of the proximal map. The new approximation is justified using a recent convex analysis tool, the proximal average, and yields a novel proximal gradient algorithm that is strictly
better than the one based on smoothing, without incurring any extra overhead. Numerical experiments conducted on two important applications, overlapping group
lasso and graph-guided fused lasso, corroborate the theoretical claims.
1 Introduction
In many scientific areas, an important methodology that has withstood the test of time is the approximation of "complicated" functions by those that are easier to handle. For instance, Taylor's
expansion in calculus [1], essentially a polynomial approximation of differentiable functions, has
fundamentally changed analysis, and mathematics more broadly. Approximations are also ubiquitous in optimization algorithms, e.g. various gradient-type algorithms approximate the objective
function with a quadratic upper bound. In some (if not all) cases, there are multiple ways to make
the approximation, and one usually has this freedom of choice. It is perhaps not hard to convince
oneself that there is no approximation that would work best in all scenarios. And one would probably also agree that a specific form of approximation should be favored if it well suits our ultimate
goal. Despite of all these common-sense, in optimization algorithms, the smooth approximations are
still dominating, bypassing some recent advances on optimizing nonsmooth functions [2, 3]. Part of
the reason, we believe, is the lack of new technical tools.
We consider the composite minimization problem where the objective consists of a smooth loss function and a sum of nonsmooth functions. Such problems have received increasing attention due to the
arise of structured sparsity [4], notably the overlapping group lasso [5], the graph-guided fused lasso
[6] and some others. These structured regularizers, although greatly enhance our modeling capability, introduce significant new computational challenges as well. Popular gradient-type algorithms
dealing with such composite problems include the generic subgradient method [7], (accelerated)
proximal gradient (APG) [2, 3], and the smoothed accelerated proximal gradient (S-APG) [8]. The
subgradient method is applicable to any nonsmooth function, although the convergence rate is rather
slow. APG, being a recent advance, can handle simple functions [9] but for more complicated structured regularizers, an inner iterative procedure is needed, resulting in an overall convergence rate
that could be as slow as the subgradient method [10]. Lastly, S-APG simply runs APG on a smooth
approximation of the original objective, resulting in a much improved convergence rate.
Our work is inspired by the recent advance on nonsmooth optimization [2, 3], of which the building
block is the proximal map of the nonsmooth function. This proximal map is available in closed-form
for simple functions but can be quite expensive for more complicated functions such as a sum of
nonsmooth functions we consider here. A key observation we make is that oftentimes the proximal
map for each individual summand can be easily computed, therefore a bold idea is to simply use the
sum of proximal maps, pretending that the proximal map is a linear operator. Somewhat surprisingly,
this naive choice, when combined with APG, results in a novel proximal algorithm that is strictly
better than S-APG, while keeping per-step complexity unchanged. We justify our method via a
new tool from convex analysis?the proximal average [11]. In essence, instead of smoothing the
nonsmooth function, we use a nonsmooth approximation whose proximal map is cheap to evaluate,
after all this is all we need to run APG.
We formally state our problem in Section 2, along with the proposed algorithm. After recalling
the relevant tools from convex analysis in Section 3 we provide the theoretical justification of our
method in Section 4. Related works are discussed in Section 5. We test the proposed algorithm in
Section 6 and conclude in Section 7.
2 Problem Formulation
We are interested in solving the following composite minimization problem:

   min_{x ∈ R^d}  ℓ(x) + f̄(x),   where   f̄(x) = Σ_{k=1}^{K} αk fk(x).          (1)

Here ℓ is convex with L0-Lipschitz continuous gradient w.r.t. the Euclidean norm ‖·‖, and αk ≥ 0, Σk αk = 1. The usual regularization constant that balances the two terms in (1) is absorbed into the loss ℓ. For the functions fk, we assume

Assumption 1. Each fk is convex and Mk-Lipschitz continuous w.r.t. the Euclidean norm ‖·‖.

The abbreviation M² = Σ_{k=1}^{K} αk Mk² is adopted throughout.
We are interested in the general case where the functions fk need not be differentiable. As mentioned in the introduction, a generic scheme that solves (1) is the subgradient method [7], of which
each step requires merely an arbitrary subgradient of the objective. With a suitable stepsize, the subgradient method converges¹ in at most O(1/ε²) steps where ε > 0 is the desired accuracy. Although
being general, the subgradient method is exceedingly slow, making it unsuitable for many practical
applications.
Another recent algorithm for solving (1) is the (accelerated) proximal gradient (APG) [2, 3], of which each iteration needs to compute the proximal map of the nonsmooth part f̄ in (1):

   P^{1/L0}_{f̄}(x) = argmin_y  (L0/2) ‖x − y‖² + f̄(y).

(Recall that L0 is the Lipschitz constant of the gradient of the smooth part ℓ in (1).) Provided that the proximal map can be computed in constant time, it can be shown that APG converges within O(1/√ε) complexity, significantly better than the subgradient method. For some simple functions, the proximal map indeed is available in closed-form, see [9] for a nice survey. However, for more complicated functions such as the one we consider here, the proximal map itself is expensive to compute and an inner iterative subroutine is required. Somewhat disappointingly, recent analysis has shown that such a two-loop procedure can be as slow as the subgradient method [10].
Yet another approach, popularized by Nesterov [8], is to approximate each nonsmooth component fk with a smooth function and then run APG. By carefully balancing the approximation and the convergence requirement of APG, the smoothed accelerated proximal gradient (S-APG) proposed in [8] converges in at most O(√(1/ε² + 1/ε)) steps, again much better than the subgradient method. The main point of this paper is to further improve S-APG, in perhaps a surprisingly simple way.
The key assumption that we will exploit is the following:
Assumption 2. Each proximal map P^η_{fk} can be computed "easily" for any η > 0.
¹ In this paper we satisfy ourselves with convergence in terms of function values, although with additional assumptions/efforts it is possible to argue for convergence in terms of the iterates.
Algorithm 1: PA-APG.
1: Initialize $x_0 = y_1$, $\mu$, $\theta_1 = 1$.
2: for t = 1, 2, . . . do
3:   $z_t = y_t - \mu\nabla\ell(y_t)$,
4:   $x_t = \sum_k \alpha_k \cdot P^\mu_{f_k}(z_t)$,
5:   $\theta_{t+1} = \frac{1 + \sqrt{1 + 4\theta_t^2}}{2}$,
6:   $y_{t+1} = x_t + \frac{\theta_t - 1}{\theta_{t+1}}(x_t - x_{t-1})$.
7: end for

Algorithm 2: PA-PG.
1: Initialize $x_0$, $\mu$.
2: for t = 1, 2, . . . do
3:   $z_t = x_{t-1} - \mu\nabla\ell(x_{t-1})$,
4:   $x_t = \sum_k \alpha_k \cdot P^\mu_{f_k}(z_t)$.
5: end for
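For concreteness, here is a minimal Python sketch of both algorithms (our own illustration, not the authors' code; `grad_ell`, `proxes`, and `alphas` are assumed inputs computing $\nabla\ell$, the maps $P^\mu_{f_k}$, and the weights $\alpha_k$):

```python
import numpy as np

def pa_apg(grad_ell, proxes, alphas, x0, mu, T):
    """Minimal sketch of PA-APG (Algorithm 1); proxes[k](z, mu) = P^mu_{f_k}(z)."""
    x_prev, y, theta = x0.copy(), x0.copy(), 1.0
    for _ in range(T):
        z = y - mu * grad_ell(y)                  # gradient step on the smooth part
        x = sum(a * prox(z, mu) for a, prox in zip(alphas, proxes))  # proximal average
        theta_next = (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2)) / 2.0
        y = x + ((theta - 1.0) / theta_next) * (x - x_prev)          # momentum
        x_prev, theta = x, theta_next
    return x_prev

def pa_pg(grad_ell, proxes, alphas, x0, mu, T):
    """Minimal sketch of PA-PG (Algorithm 2)."""
    x = x0.copy()
    for _ in range(T):
        z = x - mu * grad_ell(x)
        x = sum(a * prox(z, mu) for a, prox in zip(alphas, proxes))
    return x
```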
We prefer to leave the exact meaning of "easily" unspecified, but roughly speaking, the proximal map should be no more expensive than computing the gradient of the smooth part $\ell$ so that it does
not become the bottleneck. Both Assumption 1 and Assumption 2 are satisfied in many important
applications (examples will follow). As it will also become clear later, these assumptions are exactly
those needed by S-APG.
Unfortunately, in general, there is no known efficient way to reduce the proximal map of the average $\bar f$ to the proximal maps of its individual components $f_k$; therefore the fast scheme APG is not readily applicable. The main difficulty, of course, is due to the nonlinearity of the proximal map $P^\mu_f$ when treated as an operator on the function $f$. Despite this fact, we will "naively" pretend that the proximal map is linear and use
$$P^\mu_{\bar f} \approx \sum_{k=1}^K \alpha_k P^\mu_{f_k}. \tag{2}$$
Under this approximation, the fast scheme APG can be applied. We give one particular realization
(PA-APG) in Algorithm 1 based on the FISTA in [2]. A simpler (though slower) version (PA-PG)
based on ISTA [2] is also provided in Algorithm 2. Clearly both algorithms are easily parallelizable
if K is large. We remark that any other variation of APG, e.g. [8], is equally well applicable. Of
course, when K = 1, our algorithm reduces to the corresponding APG scheme.
At this point, one might be suspicious about the usefulness of the "naive" approximation in (2).
Before addressing this well-deserved question, let us first point out two important applications where
Assumption 1 and Assumption 2 are naturally satisfied.
Example 1 (Overlapping group lasso, [5]). In this example, $f_k(x) = \|x_{g_k}\|$ where $g_k$ is a group (subset) of variables and $x_g$ denotes a copy of $x$ with all variables not contained in the group $g$ set to 0. This group regularizer has been proven quite useful in high-dimensional statistics with the capability of selecting meaningful groups of features [5]. In the general case where the groups could overlap as needed, $P^\mu_{\bar f}$ cannot be computed easily.

Clearly each $f_k$ is convex and 1-Lipschitz continuous w.r.t. $\|\cdot\|$, i.e., $M_k = 1$ in Assumption 1. Moreover, the proximal map $P^\mu_{f_k}$ is simply a re-scaling of the variables in group $g_k$, that is,
$$[P^\mu_{f_k}(x)]_j = \begin{cases} x_j, & j \notin g_k \\ (1 - \mu/\|x_{g_k}\|)_+\, x_j, & j \in g_k, \end{cases} \tag{3}$$
where $(\cdot)_+ = \max\{\cdot, 0\}$. Therefore, both of our assumptions are met.
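In code, (3) is a block soft-thresholding of the coordinates in $g_k$; a minimal NumPy sketch (our own illustration; `group` is an index array):

```python
import numpy as np

def prox_group_norm(x, group, mu):
    """Proximal map of f(x) = ||x_{g_k}|| from eq. (3): re-scale the group block."""
    out = x.copy()
    norm = np.linalg.norm(x[group])
    if norm > 0:
        out[group] *= max(1.0 - mu / norm, 0.0)
    return out
```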
Example 2 (Graph-guided fused lasso, [6]). This example is an enhanced version of the fused lasso [12], with some graph structure exploited to improve feature selection in biostatistic applications [6]. Specifically, given some graph whose nodes correspond to the feature variables, we let $f_{ij}(x) = |x_i - x_j|$ for every edge $(i,j) \in E$. For a general graph, the proximal map of the regularizer $\bar f = \sum_{(i,j)\in E} \alpha_{ij} f_{ij}$, with $\alpha_{ij} \ge 0$, $\sum_{(i,j)\in E} \alpha_{ij} = 1$, is not easily computable.

As above, each $f_{ij}$ is 1-Lipschitz continuous w.r.t. the Euclidean norm. Moreover, the proximal map $P^\mu_{f_{ij}}$ is easy to compute:
$$[P^\mu_{f_{ij}}(x)]_s = \begin{cases} x_s, & s \notin \{i,j\} \\ x_i - \operatorname{sign}(x_i - x_j)\min\{\mu, |x_i - x_j|/2\}, & s = i \\ x_j + \operatorname{sign}(x_i - x_j)\min\{\mu, |x_i - x_j|/2\}, & s = j. \end{cases} \tag{4}$$
Again, both our assumptions are satisfied.
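Likewise, (4) moves the two endpoints of an edge toward each other by at most $\mu$ each; a minimal sketch (ours):

```python
import numpy as np

def prox_edge_diff(x, i, j, mu):
    """Proximal map of f(x) = |x_i - x_j| from eq. (4)."""
    out = x.copy()
    step = np.sign(x[i] - x[j]) * min(mu, abs(x[i] - x[j]) / 2.0)
    out[i] -= step   # the endpoints move toward each other
    out[j] += step
    return out
```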
Note that in both examples we could have incorporated weights into the component functions $f_k$ or $f_{ij}$, which amounts to changing $\alpha_k$ or $\alpha_{ij}$ accordingly. We also remark that there are other applications that fall under our consideration, but for illustration purposes we shall content ourselves with the above two examples. Conveniently, both examples have been tried with S-APG [13], and thus constitute a natural benchmark for our new algorithm.
3 Technical Tools
To justify our new algorithm, we need a few technical tools from convex analysis [14]. Let our domain $\mathcal{H}$ be a real Hilbert space with the inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$. Denote $\Gamma_0$ as the set of all lower semicontinuous proper convex functions $f : \mathcal{H} \to \mathbb{R} \cup \{\infty\}$. It is well known that the Fenchel conjugation
$$f^*(y) = \sup_x\ \langle x, y\rangle - f(x)$$
is a bijection and involution on $\Gamma_0$ (i.e. $(f^*)^* = f$). For convenience, throughout we let $q = \frac{1}{2}\|\cdot\|^2$ ($q$ for "quadratic"). Note that $q$ is the only function which coincides with its Fenchel conjugate. Another convention that we borrow from convex analysis is to write $(f\diamond\sigma)(x) = \sigma f(\sigma^{-1}x)$ for $\sigma > 0$. We easily verify $(\sigma f)^* = f^*\diamond\sigma$ and also $(f\diamond\sigma)^* = \sigma f^*$.
For any $f \in \Gamma_0$, we define its Moreau envelope (with parameter $\mu > 0$) [14, 15]
$$M^\mu_f(x) = \min_y\ \tfrac{1}{2\mu}\|x - y\|^2 + f(y), \tag{5}$$
and correspondingly the proximal map
$$P^\mu_f(x) = \operatorname*{arg\,min}_y\ \tfrac{1}{2\mu}\|x - y\|^2 + f(y). \tag{6}$$
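For intuition, both objects are available in closed form when $f = |\cdot|$: the proximal map (6) is the soft-threshold and the Moreau envelope (5) is the Huber function. A small check (our own sketch) that plugging the minimizer of (6) into the objective of (5) recovers the envelope:

```python
import numpy as np

def prox_abs(x, mu):
    """P^mu_f for f = |.|: the soft-threshold."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def env_abs(x, mu):
    """M^mu_f for f = |.|: the Huber function."""
    return np.where(np.abs(x) <= mu, x ** 2 / (2 * mu), np.abs(x) - mu / 2)

x, mu = np.linspace(-3, 3, 61), 0.5
p = prox_abs(x, mu)
assert np.allclose((x - p) ** 2 / (2 * mu) + np.abs(p), env_abs(x, mu))
```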
Since $f$ is closed convex and $\|\cdot\|^2$ is strongly convex, the proximal map is well defined and single-valued. As mentioned before, the proximal map is the key component of fast schemes such as APG. We summarize some nice properties of the Moreau envelope and the proximal map as:
Proposition 1. Let $\mu, \nu, \sigma > 0$, $f \in \Gamma_0$, and $\mathrm{Id}$ be the identity map; then
i). $M^\mu_f \in \Gamma_0$ and $(M^\mu_f)^* = f^* + \mu q$;
ii). $M^\mu_f \le f$, $\inf_x M^\mu_f(x) = \inf_x f(x)$, and $\operatorname{argmin}_x M^\mu_f(x) = \operatorname{argmin}_x f(x)$;
iii). $M^\mu_f$ is differentiable with $\nabla M^\mu_f = \frac{1}{\mu}(\mathrm{Id} - P^\mu_f)$;
iv). $M^\mu_{\sigma f} = \sigma M^{\sigma\mu}_f$ and $P^\mu_{\sigma f} = P^{\sigma\mu}_f = (P^\mu_{f\diamond\sigma^{-1}})\diamond\sigma$;
v). $M^\nu_{M^\mu_f} = M^{\mu+\nu}_f$ and $P^\nu_{M^\mu_f} = \frac{\mu}{\mu+\nu}\,\mathrm{Id} + \frac{\nu}{\mu+\nu}\,P^{\mu+\nu}_f$;
vi). $\mu M^\mu_f + (M^{1/\mu}_{f^*})\diamond\mu = q$ and $P^\mu_f + (P^{1/\mu}_{f^*})\diamond\mu = \mathrm{Id}$.
i) is the well-known duality between infimal convolution and summation. ii), albeit trivial, is the driving force behind the proximal point algorithm [16]. iii) justifies the "niceness" of the Moreau envelope and connects it with the proximal map. iv) and v) follow from simple algebra. And lastly vi), known as Moreau's identity [15], plays an important role in the early development of convex analysis. We remind the reader that $(M^\mu_f)^*$ in general is different from $M^\mu_{f^*}$.
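As a quick numerical illustration of Moreau's identity vi) (our own check), take $f = |\cdot|$: then $P^\mu_f$ is the soft-threshold, $f^*$ is the indicator of $[-1, 1]$, and the prox of an indicator is just the projection onto its set:

```python
import numpy as np

mu = 0.7
x = np.linspace(-4, 4, 81)
p_f = np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)   # P^mu_f(x)
p_conj = np.clip(x / mu, -1.0, 1.0)                  # P^{1/mu}_{f*}(x / mu)
assert np.allclose(p_f + mu * p_conj, x)             # soft-threshold + scaled projection = Id
```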
Fix $\sigma > 0$. Let $\mathrm{SC}_\sigma \subseteq \Gamma_0$ denote the class of $\sigma$-strongly convex functions, that is, functions $f$ such that $f - \sigma q$ is convex. Similarly, let $\mathrm{SS}_\sigma \subseteq \Gamma_0$ denote the class of finite-valued functions whose gradient is $\sigma$-Lipschitz continuous (w.r.t. the norm $\|\cdot\|$). A well-known duality between strong convexity and smoothness is that for $f \in \Gamma_0$, we have $f \in \mathrm{SC}_\sigma$ iff $f^* \in \mathrm{SS}_{1/\sigma}$, cf. [17, Theorem 18.15]. Based on this duality, we have the next result, which turns out to be critical. (Proof in Appendix A)

Proposition 2. Fix $\mu > 0$. The Moreau envelope map $M^\mu : \Gamma_0 \to \mathrm{SS}_{1/\mu}$ that sends $f \in \Gamma_0$ to $M^\mu_f$ is bijective, increasing, and concave on any convex subset of $\Gamma_0$ (under the pointwise order).
It is clear that $\mathrm{SS}_{1/\mu}$ is a convex subset of $\Gamma_0$, which motivates the definition of the proximal average, the key object for us. Fix constants $\alpha_k \ge 0$ with $\sum_{k=1}^K \alpha_k = 1$. Recall that $\bar f = \sum_k \alpha_k f_k$ with each $f_k \in \Gamma_0$, i.e. $\bar f$ is the convex combination of the component functions $\{f_k\}$ under the weights $\{\alpha_k\}$. Note that we always assume $\bar f \in \Gamma_0$ (the exception $\bar f \equiv \infty$ is clearly uninteresting).

Definition 1 (Proximal Average, [11, 15]). Denote $\mathbf f = (f_1, \ldots, f_K)$ and $\mathbf f^* = (f_1^*, \ldots, f_K^*)$. The proximal average $A^\mu_{\mathbf f,\alpha}$, or simply $A^\mu$ when the component functions and weights are clear from context, is the unique function $h \in \Gamma_0$ such that $M^\mu_h = \sum_{k=1}^K \alpha_k M^\mu_{f_k}$.
Indeed, the existence of the proximal average follows from the surjectivity of $M^\mu$ while the uniqueness follows from the injectivity of $M^\mu$, both proven in Proposition 2. The main property of the proximal average, as seen from its definition, is that its Moreau envelope is the convex combination of the Moreau envelopes of the component functions. By iii) of Proposition 1 we immediately obtain
$$P^\mu_{A^\mu} = \sum_{k=1}^K \alpha_k P^\mu_{f_k}. \tag{7}$$
Recall that the right-hand side is exactly the approximation we employed in Section 2.
Interestingly, using the properties summarized in Proposition 1, one can show that the Fenchel conjugate of the proximal average, denoted $(A^\mu)^*$, enjoys a similar property [11]:
$$\big(M^{1/\mu}_{(A^\mu)^*}\big)\diamond\mu = q - \mu M^\mu_{A^\mu} = q - \mu\sum_{k=1}^K \alpha_k M^\mu_{f_k} = \sum_{k=1}^K \alpha_k\big(q - \mu M^\mu_{f_k}\big) = \sum_{k=1}^K \alpha_k\big[(M^{1/\mu}_{f_k^*})\diamond\mu\big] = \Big[\sum_{k=1}^K \alpha_k M^{1/\mu}_{f_k^*}\Big]\diamond\mu,$$
that is, $M^{1/\mu}_{(A^\mu_{\mathbf f,\alpha})^*} = \sum_{k=1}^K \alpha_k M^{1/\mu}_{f_k^*} = M^{1/\mu}_{A^{1/\mu}_{\mathbf f^*,\alpha}}$, therefore by the injective property established in Proposition 2:
$$(A^\mu_{\mathbf f,\alpha})^* = A^{1/\mu}_{\mathbf f^*,\alpha}. \tag{8}$$
From its definition it is also possible to derive an explicit formula for the proximal average (although for our purpose only the existence is needed):
$$A^\mu_{\mathbf f,\alpha} = \Big[\Big(\sum_{k=1}^K \alpha_k M^\mu_{f_k}\Big)^* - \mu q\Big]^* = \Big(\sum_{k=1}^K \alpha_k M^{1/\mu}_{f_k^*}\Big)^* - q\diamond\mu, \tag{9}$$
where the second equality is obtained by conjugating (8) and applying the first equality to the conjugate. By the concavity and monotonicity of $M^\mu$, we have the inequality
$$M^\mu_{\bar f} \ge \sum_{k=1}^K \alpha_k M^\mu_{f_k} = M^\mu_{A^\mu} \ \Longrightarrow\ \bar f \ge A^\mu. \tag{10}$$
The above results (after Definition 1) are due to [11], although our treatment is slightly different. It is well known that as $\mu \to 0$, $M^\mu_f \to f$ pointwise [14], which, under the Lipschitz assumption, can be strengthened to uniform convergence (Proof in Appendix B):

Proposition 3. Under Assumption 1 we have $0 \le \bar f - M^\mu_{A^\mu} \le \frac{\mu M^2}{2}$.

For the proximal average, [11] showed that $A^\mu \to \bar f$ pointwise, which again can be strengthened to uniform convergence (proof follows from (10) and Proposition 3 since $A^\mu \ge M^\mu_{A^\mu}$):

Proposition 4. Under Assumption 1 we have $0 \le \bar f - A^\mu \le \frac{\mu M^2}{2}$.
As it turns out, S-APG approximates the nonsmooth function $\bar f$ with the smooth function $M^\mu_{A^\mu}$, while our algorithm operates on the nonsmooth approximation $A^\mu$ (note that it can be shown that $A^\mu$ is smooth iff some component $f_i$ is smooth). By (10) and ii) in Proposition 1 we have
$$M^\mu_{A^\mu} \le A^\mu \le \bar f, \tag{11}$$
meaning that the proximal average $A^\mu$ is a better under-approximation of $\bar f$ than $M^\mu_{A^\mu}$.

[Figure 1: three panels ($\alpha = 0.5$ with $\mu = 10, 5, 1$) plotting $f_1$, $f_2$, $\bar f$, $M^\mu_{A^\mu}$, and $A^\mu$ from Example 3.]
Figure 1: See Example 3 for context. As predicted, $M^\mu_{A^\mu} \le A^\mu \le \bar f$. Observe that the proximal average $A^\mu$ remains nondifferentiable at 0 while $M^\mu_{A^\mu}$ is smooth everywhere. For $x \ge 0$, $f_1 = f_2 = \bar f = A^\mu$ (the red circled line), thus the proximal average $A^\mu$ is a strictly tighter approximation than smoothing. When $\mu$ is small (right panel), $\bar f \approx M^\mu_{A^\mu} \approx A^\mu$.
Let us compare the proximal average $A^\mu$ with the smooth approximation $M^\mu_{A^\mu}$ on a 1-D example.

Example 3. Let $f_1(x) = |x|$, $f_2(x) = \max\{x, 0\}$. Clearly both are 1-Lipschitz continuous. Moreover, $P^\mu_{f_1}(x) = \operatorname{sign}(x)(|x| - \mu)_+$, $P^\mu_{f_2}(x) = (x - \mu)_+ + x - (x)_+$,
$$M^\mu_{f_1}(x) = \begin{cases} \frac{x^2}{2\mu}, & |x| \le \mu \\ |x| - \frac{\mu}{2}, & \text{otherwise}, \end{cases} \qquad\text{and}\qquad M^\mu_{f_2}(x) = \begin{cases} 0, & x \le 0 \\ \frac{x^2}{2\mu}, & 0 \le x \le \mu \\ x - \frac{\mu}{2}, & \text{otherwise}. \end{cases}$$
Finally, using (9) we obtain (with $\alpha_1 = \alpha$, $\alpha_2 = 1 - \alpha$)
$$A^\mu(x) = \begin{cases} x, & x \ge 0 \\ \frac{\alpha}{1-\alpha}\frac{x^2}{2\mu}, & (\alpha - 1)\mu \le x \le 0 \\ -\alpha x - \alpha(1-\alpha)\frac{\mu}{2}, & x \le (\alpha - 1)\mu. \end{cases}$$
Figure 1 depicts the case $\alpha = 0.5$ with different values of the smoothing parameter $\mu$.
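The closed forms above are easy to sanity-check numerically. The following sketch (our own; the values of `alpha`, `mu`, and the grid are arbitrary) verifies Definition 1 for this example, i.e. that the Moreau envelope of $A^\mu$ equals $\alpha M^\mu_{f_1} + (1-\alpha)M^\mu_{f_2}$:

```python
import numpy as np

alpha, mu = 0.5, 2.0
f1 = np.abs
f2 = lambda t: np.maximum(t, 0.0)

def A(t):  # closed-form proximal average from Example 3
    return np.where(t >= 0, t,
           np.where(t >= (alpha - 1) * mu,
                    alpha / (1 - alpha) * t ** 2 / (2 * mu),
                    -alpha * t - alpha * (1 - alpha) * mu / 2))

grid = np.linspace(-60, 60, 240001)
def envelope(f, x):  # brute-force M^mu_f(x) over the grid
    return np.min((x - grid) ** 2 / (2 * mu) + f(grid))

for x in (-3.0, -0.7, 0.0, 1.5):
    lhs = envelope(A, x)
    rhs = alpha * envelope(f1, x) + (1 - alpha) * envelope(f2, x)
    assert abs(lhs - rhs) < 1e-3
```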
4 Theoretical Justification

Given our development in the previous section, it is now clear that our proposed algorithm aims at solving the approximation
$$\min_x\ \ell(x) + A^\mu(x). \tag{12}$$
The next important piece is to show how a careful choice of $\mu$ leads to a strictly better convergence rate than S-APG. Recall that using APG to solve (12) requires computing the following proximal map in each iteration:
$$P^{1/L_0}_{A^\mu}(x) = \operatorname*{arg\,min}_y\ \tfrac{L_0}{2}\|x - y\|^2 + A^\mu(y),$$
which, unfortunately, is not yet amenable to efficient computation, due to the mismatch of the constants $1/L_0$ and $\mu$ (recall that in the decomposition (7) the superscript and subscript must both be $\mu$). In general, there is no known explicit formula that would reduce $P^{1/L_0}_f$ to $P^\mu_f$ for different positive constants $L_0$ and $\mu$ [17, p. 338]; see also iv) in Proposition 1. Our fix is almost trivial: if necessary, we use a bigger Lipschitz constant $L_0 = 1/\mu$ so that we can compute the proximal map easily. This is indeed legitimate since $L_0$-Lipschitz implies $L$-Lipschitz for any $L \ge L_0$. Said differently, all we need is to tune down the stepsize a little bit in APG. We state formally the convergence property of our algorithm as (Proof in Appendix C):
Theorem 1. Fix the accuracy $\epsilon > 0$. Under Assumption 1 and the choice $\mu = \min\{1/L_0,\ 2\epsilon/M^2\}$, after at most $\sqrt{\tfrac{2}{\mu\epsilon}}\,\|x_0 - x\|$ steps, the output of Algorithm 1, say $\hat x$, satisfies
$$\ell(\hat x) + \bar f(\hat x) \le \ell(x) + \bar f(x) + 2\epsilon.$$
The same guarantee holds for Algorithm 2 after at most $\tfrac{1}{2\mu\epsilon}\|x_0 - x\|^2$ steps.
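As a small helper (our own, not from the paper), the parameter choice and iteration bounds of Theorem 1 can be computed directly; `R` stands for $\|x_0 - x\|$:

```python
import math

def pa_apg_parameters(L0, M2, eps, R):
    """mu and iteration bounds from Theorem 1; M2 = sum_k alpha_k * M_k^2."""
    mu = min(1.0 / L0, 2.0 * eps / M2)
    t_apg = math.sqrt(2.0 / (mu * eps)) * R      # PA-APG (Algorithm 1)
    t_pg = R ** 2 / (2.0 * mu * eps)             # PA-PG (Algorithm 2)
    return mu, math.ceil(t_apg), math.ceil(t_pg)
```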
Note that if we could reduce $P^{1/L_0}_{A^\mu}$ efficiently to $P^\mu_{A^\mu}$, we would end up with the optimal (overall) rate $O(\sqrt{1/\epsilon})$, even though we approximate the nonsmooth function $\bar f$ by the proximal average $A^\mu$. In other words, approximation itself does not lead to an inferior rate; it is our inability to (efficiently) relate proximal maps that leads to the sacrifice in convergence rates.
5 Discussions

To ease our discussion of related works, let us first point out a fact that is not always explicitly recognized: S-APG essentially relies on approximating the nonsmooth function $\bar f$ with $M^\mu_{A^\mu}$. Indeed, consider first the case $K = 1$. The smoothing idea introduced in [8] rests on the superficial max-structure assumption, that is, $f(x) = \max_{y\in C} \langle x, y\rangle - h(y)$ where $C$ is some bounded convex set and $h \in \Gamma_0$. As is well known (and easily verified from the definition), $f \in \Gamma_0$ is $M$-Lipschitz continuous (w.r.t. the norm $\|\cdot\|$) iff $\mathrm{dom}\, f^* \subseteq B_{\|\cdot\|}(0, M)$, the ball centered at the origin with radius $M$. Thus the function $f \in \Gamma_0$ admits the max-structure iff it is Lipschitz continuous, i.e., satisfies our Assumption 1, in which case $h = f^*$ and $C = \mathrm{dom}\, f^*$. [8] proceeded to add some "distance" function $d$ to obtain the approximation $f_\mu(x) = \max_{y\in C} \langle x, y\rangle - f^*(y) - \mu d(y)$. For simplicity, we will only consider $d = q$, thus $f_\mu = (f^* + \mu q)^* = M^\mu_f$. The other assumption of S-APG [8] is that $f_\mu$ and the maximizer in its expression can be easily computed, which is precisely our Assumption 2. Finally, for the general case where $\bar f$ is an average of $K$ nonsmooth functions, the smoothing technique is applied in a component-by-component way, i.e., approximating $\bar f$ with $M^\mu_{A^\mu}$.
For comparison, let us recall that S-APG finds a $2\epsilon$-accurate solution in at most $O\big(\sqrt{(L_0 + M^2/(2\epsilon))\cdot 1/\epsilon}\big)$ steps, since the Lipschitz constant of the gradient of $\ell + M^\mu_{A^\mu}$ is upper bounded by $L_0 + M^2/(2\epsilon)$ (under the choice of $\mu$ in Theorem 1). This is strictly worse than the complexity $O\big(\sqrt{\max\{L_0, M^2/(2\epsilon)\}\cdot 1/\epsilon}\big)$ of our approach. In other words, we have managed to remove the secondary term in the complexity bound of S-APG. We should emphasize that this strict improvement is obtained under exactly the same assumptions and with an algorithm as simple as (if not simpler than) S-APG. In some sense it is quite remarkable that the seemingly "naive" approximation that pretends linearity of the proximal map not only can be justified but also leads to a strictly better result.
Let us further explain how the improvement is possible. As mentioned, S-APG approximates $\bar f$ with the smooth function $M^\mu_{A^\mu}$. This smooth approximation is beneficial if our capability is limited to smooth functions. Put differently, S-APG implicitly treats applying fast gradient algorithms as the ultimate goal. However, recent advances in nonsmooth optimization have broadened the range of fast schemes: it is not smoothness but the proximal map that allows fast convergence. Just as APG improves upon the subgradient method, our approach, with the ultimate goal of enabling efficient computation of the proximal map, improves upon S-APG. Another lesson we wish to point out is that unnecessary "over-smoothing", as in S-APG, does hurt performance since it always increases the Lipschitz constant. To summarize, smoothing is not free and it should be used only when truly needed.
Lastly, we note that our algorithm shares some similarity with forward-backward splitting procedures and alternating direction methods [9, 18, 19], although a detailed examination will not be
given here. Due to space limits, we refer further extensions and improvements to [20, Chapter 3].
6 Experiments
We compare the proposed algorithm with S-APG on two important problems: overlapping group lasso and graph-guided fused lasso. See Example 1 and Example 2 for details about the nonsmooth function $\bar f$. We note that S-APG has demonstrated superior performance on both problems in [13], therefore we concentrate on comparing with it alone. Bear in mind that the purpose of our experiment is to verify the theoretical improvement discussed in Section 5. We are not interested in fine-tuning parameters here (despite its practical importance); thus, for a fair comparison, we use the same desired accuracy $\epsilon$, Lipschitz constant $L_0$, and other parameters for all methods. Since both our method and S-APG have the same per-step complexity, we simply run them for a maximum number of iterations (after which saturation is observed) and report all the intermediate objective values.
[Figure 2: objective value vs. iteration for PA-PG, S-PG, PA-APG, and S-APG under three accuracy settings: $2\epsilon = 2/L_0$, $1/L_0$, and $0.5/L_0$.]
Figure 2: Objective value vs. iteration on overlapping group lasso.
[Figure 3: objective value vs. iteration for PA-PG, S-PG, PA-APG, and S-APG under $2\epsilon = 2/L_0$, $1/L_0$, and $0.5/L_0$.]
Figure 3: Objective value vs. iteration on graph-guided fused lasso.
Overlapping Group Lasso: Following [13] we generate the data as follows. We set $\ell(x) = \frac{1}{2\lambda K}\|Ax - b\|^2$ where $A \in \mathbb{R}^{n\times d}$ has entries sampled from i.i.d. normal distributions, $x_j = (-1)^j \exp(-(j-1)/100)$, and $b = Ax + \xi$ with the noise $\xi$ sampled from a zero-mean, unit-variance normal distribution. Finally, the groups in the regularizer $\bar f$ are defined as
$$\{\{1, \ldots, 100\}, \{91, \ldots, 190\}, \ldots, \{d - 99, \ldots, d\}\},$$
where $d = 90K + 10$. That is, there are $K$ groups, each containing 100 variables, and consecutive groups overlap by 10 variables. We adopt the uniform weight $\alpha_k = 1/K$ and set $\lambda = K/5$.
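A NumPy sketch of this synthetic setup (our own translation of the description; the random seed and helper names are arbitrary choices):

```python
import numpy as np

n, K = 5000, 50
d = 90 * K + 10
lam = K / 5.0
rng = np.random.default_rng(0)

A = rng.standard_normal((n, d))
x_true = np.array([(-1) ** j * np.exp(-(j - 1) / 100.0) for j in range(1, d + 1)])
b = A @ x_true + rng.standard_normal(n)

# K groups of 100 consecutive variables; adjacent groups overlap by 10
groups = [np.arange(90 * k, 90 * k + 100) for k in range(K)]
alphas = np.full(K, 1.0 / K)

def grad_ell(x):  # gradient of ell(x) = ||Ax - b||^2 / (2 * lam * K)
    return A.T @ (A @ x - b) / (lam * K)
```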
Figure 2 shows the results for n = 5000 and K = 50, with three different accuracy parameters.
For completeness, we also include the results for the non-accelerated versions (PA-PG and S-PG).
Clearly, accelerated algorithms are much faster than their non-accelerated cousins. Observe that our
algorithms (PA-APG and PA-PG) converge consistently faster than S-APG and S-PG, respectively,
with a big margin in the favorable case (middle panel). Again we emphasize that this improvement
is achieved without any overhead.
Graph-guided Fused Lasso: We generate $\ell$ similarly to the above. Following [13], the graph edges $E$ are obtained by thresholding the correlation matrix. The case $n = 5000$, $d = 1000$, $\lambda = 15$ is shown in Figure 3, under three different desired accuracies. Again, we observe that accelerated algorithms are faster than the non-accelerated versions and that our algorithms consistently converge faster.
7 Conclusions
We have considered the composite minimization problem consisting of a smooth loss and a sum of nonsmooth regularizers. Departing from smoothing, we considered a seemingly naive nonsmooth approximation that simply pretends the proximal map is linear. Based on the proximal average, a new tool from convex analysis, we proved that the new approximation leads to a novel algorithm that strictly improves the state of the art. Experiments on both overlapping group lasso and graph-guided fused lasso verified the superiority of the proposed method. An interesting question arising from this work, currently under our investigation, is in what sense a given approximation is optimal. We also plan to apply our algorithm to other practical problems.
Acknowledgement

The author thanks Bob Williamson and Xinhua Zhang from NICTA Canberra for their hospitality during the author's visit when this work was performed; Warren Hare and Yves Lucet from UBC Okanagan for drawing his attention to the proximal average; and the reviewers for their valuable comments.
References

[1] Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill, 3rd edition, 1976.
[2] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] Yurii Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, Series B, 140:125-161, 2013.
[4] Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Structured sparsity through convex optimization. Statistical Science, 27(4):450-468, 2012.
[5] Peng Zhao, Guilherme Rocha, and Bin Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468-3497, 2009.
[6] Seyoung Kim and Eric P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):1-18, 2009.
[7] Naum Z. Shor. Minimization Methods for Non-Differentiable Functions. Springer, 1985.
[8] Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
[9] Patrick L. Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185-212. Springer, 2011.
[10] Silvia Villa, Saverio Salzo, Luca Baldassarre, and Alessandro Verri. Accelerated and inexact forward-backward algorithms. SIAM Journal on Optimization, 23(3):1607-1633, 2013.
[11] Heinz H. Bauschke, Rafal Goebel, Yves Lucet, and Xianfu Wang. The proximal average: Basic theory. SIAM Journal on Optimization, 19(2):766-785, 2008.
[12] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67:91-108, 2005.
[13] Xi Chen, Qihang Lin, Seyoung Kim, Jaime G. Carbonell, and Eric P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719-752, 2012.
[14] Ralph Tyrrell Rockafellar and Roger J-B Wets. Variational Analysis. Springer, 1998.
[15] Jean J. Moreau. Proximité et dualité dans un espace Hilbertien. Bulletin de la Société Mathématique de France, 93:273-299, 1965.
[16] Ralph Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877-898, 1976.
[17] Heinz H. Bauschke and Patrick L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 1st edition, 2011.
[18] Hua Ouyang, Niao He, Long Q. Tran, and Alexander Gray. Stochastic alternating direction method of multipliers. In International Conference on Machine Learning, 2013.
[19] Taiji Suzuki. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In International Conference on Machine Learning, 2013.
[20] Yaoliang Yu. Fast Gradient Algorithms for Structured Sparsity. PhD thesis, University of Alberta, 2013.
Polar Operators for Structured Sparse Estimation
Xinhua Zhang
Machine Learning Research Group
National ICT Australia and ANU
[email protected]
Yaoliang Yu and Dale Schuurmans
Department of Computing Science, University of Alberta
Edmonton, Alberta T6G 2E8, Canada
{yaoliang,dale}@cs.ualberta.ca
Abstract
Structured sparse estimation has become an important technique in many areas of
data analysis. Unfortunately, these estimators normally create computational difficulties that entail sophisticated algorithms. Our first contribution is to uncover a
rich class of structured sparse regularizers whose polar operator can be evaluated
efficiently. With such an operator, a simple conditional gradient method can then
be developed that, when combined with smoothing and local optimization, significantly reduces training time vs. the state of the art. We also demonstrate a new
reduction of polar to proximal maps that enables more efficient latent fused lasso.
1 Introduction
Sparsity is an important concept in high-dimensional statistics [1] and signal processing [2] that has
led to important application successes by reducing model complexity and improving interpretability
of the results. Standard computational strategies such as greedy feature selection [3] and generic
convex optimization [4-7] can be used to implement simple sparse estimators. However, sophisticated notions of structured sparsity have been recently developed that can encode combinatorial
patterns over variable subsets [8]. Although combinatorial structure greatly enhances modeling capability, it also creates computational challenges that require sophisticated optimization approaches.
For example, current structured sparse estimators often adopt an accelerated proximal gradient
(APG) strategy [9, 10], which has a low per-step complexity and enjoys an optimal convergence
rate among black-box first-order procedures [10]. Unfortunately, APG must also compute a proximal update (PU) of the nonsmooth regularizer during each iteration. Not only does the PU require a
highly nontrivial computation for structured regularizers [4]?e.g., requiring tailored network flow
algorithms in existing cases [5, 11, 12]?it yields dense intermediate iterates. Recently, [6] has
demonstrated a class of regularizers where the corresponding PUs can be computed by a sequence
of submodular function minimizations, but such an approach remains expensive.
Instead, in this paper, we demonstrate that an alternative approach can be more effective for many
structured regularizers. We base our development on the generalized conditional gradient (GCG)
algorithm [13, 14], which also demonstrates promise for sparse model optimization. Although GCG
possesses a slower convergence rate than APG, it demonstrates competitive performance if its updates are interleaved with local optimization [14-16]. Moreover, GCG produces sparse intermediate
iterates, which allows additional sparsity control. Importantly, unlike APG, GCG requires computing the polar of the regularizer, instead of the PU, in each step. This difference allows important
new approaches for characterizing and evaluating structured sparse regularizers.
Our first main contribution is to characterize a rich class of structured sparse regularizers that allow
efficient computation of their polar operator. In particular, motivated by [6], we consider a family
of structured sparse regularizers induced by a cost function on variable subsets. By introducing a
?lifting? construction, we show how these regularizers can be expressed as linear functions, which
after some reformulation, allows efficient evaluation by a simple linear program (LP). Important
examples covered include overlapping group lasso [5] and path regularization in directed acyclic
graphs [12]. By exploiting additional structure in these cases, the LP can be reduced to a piecewise
linear objective over a simple domain, allowing further reduction in computation time via smoothing
[17]. For example, for the overlapping group lasso with $n$ groups where each variable belongs to at most $r$ groups, the cost of evaluating the polar operator can be reduced from $O(rn^3)$ to $O(rn\sqrt{n/\epsilon})$ for a desired accuracy of $\epsilon$. Encouraged by the superior performance of GCG in these cases, we
then provide a simple reduction of the polar operator to the PU. This reduction makes it possible to
extend GCG to cases where the PU is easy to compute. To illustrate the usefulness of this reduction
we provide an efficient new algorithm for solving the fused latent lasso [18].
2 Structured Sparse Models
Consider the standard regularized risk minimization framework
$$\min_{w\in\mathbb{R}^n}\ f(w) + \lambda\,\Omega(w), \tag{1}$$
where $f$ is the empirical risk, assumed to be convex with a Lipschitz continuous gradient, and $\Omega$ is a convex, positively homogeneous regularizer, i.e. a gauge [19, §4]. Let $2^{[n]}$ denote the power set of $[n] := \{1, \ldots, n\}$, and let $\bar{\mathbb{R}}_+ := \mathbb{R}_+ \cup \{\infty\}$. Recently, [6] has established a principled method for deriving regularizers from a subset cost function $F : 2^{[n]} \to \bar{\mathbb{R}}_+$ based on defining the gauge:
$$\Omega_F(w) = \inf\{\rho \ge 0 : w \in \rho\,\mathrm{conv}(S_F)\}, \ \text{where}\ S_F = \big\{w_A : \|w_A\|_{\bar p} = 1/\sqrt[p]{F(A)},\ \emptyset \ne A \subseteq [n]\big\}. \tag{2}$$
Here $\rho$ is a scalar, $\mathrm{conv}(S_F)$ denotes the convex hull of the set $S_F$, $\bar p, p \ge 1$ with $\frac{1}{\bar p} + \frac{1}{p} = 1$, $\|\cdot\|_p$ throughout is the usual $\ell_p$-norm, and $w_A$ denotes a duplicate of $w$ with all coordinates not in $A$ set to 0. Note that we have tacitly assumed $F(A) = 0$ iff $A = \emptyset$ in (2). The gauge $\Omega_F$ defined in (2) is also known as the atomic norm with the set of atoms $S_F$ [20]. It will be useful to recall that the polar of a gauge $\Omega$ is defined by [19, §15]:
$$\Omega^*(g) := \sup_w \{\langle g, w\rangle : \Omega(w) \le 1\}. \tag{3}$$
In particular, the polar of a norm is its dual norm. (Recall that any norm is also a gauge.) For the specific gauge $\Omega_F$ defined in (2), its polar is simply the support function of $S_F$ [19, Theorem 13.2]:
$$\Omega^*_F(g) = \max_{w\in S_F}\ \langle g, w\rangle = \max_{\emptyset\ne A\subseteq[n]}\ \|g_A\|_p\,/\,[F(A)]^{1/p}. \tag{4}$$
(The first equality uses the definition of support function, and the second follows from (2).) By varying $\bar p$ and $F$, one can generate a class of sparsity-inducing regularizers that includes most current proposals [6]. For instance, if $F(A) = 1$ whenever $|A|$ (the cardinality of $A$) is 1, and $F(A) = \infty$ for $|A| > 1$, then $\Omega^*_F$ is the $\ell_\infty$ norm and $\Omega_F$ is the usual $\ell_1$ norm. More importantly, one can encode structural information through the cost function $F$, which selects and establishes preferences over the set of atoms $S_F$. As pointed out in [6], when $F$ is submodular, (4) can be evaluated by a secant method with submodular minimizations ([21, §8.4], see also Appendix B). However, as we will
2.1
Optimization Algorithms
A standard approach for minimizing (1) is the accelerated proximal gradient (APG) algorithm [9,
10], where each iteration involves solving the proximal update (PU): wk+1 = arg minw hdk , wi +
1
2
and descent direction dk . Although it can be
2sk kw ? wk k2 + ??F (w), for some step size sk ?
shown that APG finds an accurate solution in O(1/ ) iterations [9, 10], each update can be quite
difficult to compute when ?F encodes combinatorial structure, as noted in the introduction.
An alternative approach to solving (1) is the generalized conditional gradient (GCG) method [13,
14], which has recently received renewed attention. Unlike APG, GCG only requires the polar
operator of the regularizer ?F to be computed in each iteration, given by the argument of (4):
P?F (g) = arg max hg, wi = F (C)
w?SF
?1
p
p
arg max hgC , wi for C = arg max kgA kp /F (A). (5)
w:kwkp?=1
?6=A?[n]
Algorithm 1 Generalized conditional gradient (GCG) for optimizing (1).
1: Initialize $w_0 \leftarrow 0$, $s_0 \leftarrow 0$, $\ell^0 \leftarrow 0$.
2: for k = 0, 1, . . . do
3:   Polar operator: $v_k \leftarrow P_{\Omega_F}(g_k)$, $A_k \leftarrow C(g_k)$, where $g_k = -\nabla f(w_k)$ and $C$ is defined in (5).
4:   2-D conic search: $(\alpha, \beta) := \operatorname{arg\,min}_{\alpha\ge0,\beta\ge0} f(\alpha w_k + \beta v_k) + \lambda(\alpha s_k + \beta)$.
5:   Local re-optimization: $\{u^i\}_1^k := \operatorname{arg\,min}_{\{u^i = u^i_{A_i}\}} f(\sum_i u^i) + \lambda\sum_i F(A_i)^{1/p}\|u^i\|_{\bar p}$, where the $\{u^i\}$ are initialized by $u^i = \alpha \ell^i$ for $i < k$ and $u^i = \beta v_i$ for $i = k$.
6:   $w_{k+1} \leftarrow \sum_i u^i$, $\ell^i \leftarrow u^i$ for $i \le k$, $s_{k+1} \leftarrow \sum_i F(A_i)^{1/p}\|u^i\|_{\bar p}$.
7: end for

Algorithm 1 outlines a GCG procedure for solving (1) that only requires the evaluation of $P_{\Omega_F}$ in each iteration, without needing the full PU to be computed. The algorithm is quite simple: Line 3
evaluates the polar operator, which provides a descent direction $v_k$; Line 4 finds the optimal step sizes for combining the current iterate $w_k$ with the direction $v_k$; and Line 5 locally improves the objective (1) by maintaining the same support patterns but re-optimizing the parameters. It has been shown that GCG can find an $\epsilon$-accurate solution to (1) in $O(1/\epsilon)$ steps, provided only that the polar (5) is computed to $\epsilon$ accuracy [14]. Although GCG has a slower theoretical convergence rate than APG, the introduction of local optimization (Line 5) often yields faster convergence in practice [14-16]. Importantly, Line 5 does not increase the sparsity of the intermediate iterates. Our main goal in this paper therefore is to extend this GCG approach to structured sparse models by developing efficient algorithms for computing the polar operator for the structured regularizers defined in (2).
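A compact Python sketch of the GCG loop (our own illustration; it omits the local re-optimization of Line 5, and `polar` stands for an oracle implementing (5)):

```python
import numpy as np
from scipy.optimize import minimize

def gcg(f, grad_f, polar, lam, n, iters):
    """Sketch of Algorithm 1 without Line 5; polar(g) returns (v_k, A_k)."""
    w, s = np.zeros(n), 0.0
    for _ in range(iters):
        v, _ = polar(-grad_f(w))                     # Line 3
        def obj(ab):                                 # Line 4: 2-D conic search
            a, b = np.maximum(ab, 0.0)
            return f(a * w + b * v) + lam * (a * s + b)
        a, b = np.maximum(minimize(obj, [1.0, 0.0], method="Nelder-Mead").x, 0.0)
        w, s = a * w + b * v, a * s + b
    return w
```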
3 Polar Operators for Atomic Norms
Let $\mathbf 1$ denote the vector of all 1s with length determined by context. Our first main contribution is to develop a general class of atomic norm regularizers whose polar operator (5) can be computed efficiently. To begin, consider the case of a (partially) linear function $F$ where there exists a $c \in \mathbb{R}^n$ such that $F(A) = \langle c, 1_A\rangle$ for all $A \in \mathrm{dom}\,F$ (note that the domain need not be a lattice). A few useful regularizers can be generated by linear functions: for example, the $\ell_1$ norm can be derived from $F(A) = \langle \mathbf 1, 1_A\rangle$ for $|A| = 1$, which is linear. Unfortunately, linearity is too restrictive to capture most structured regularizers of interest; therefore we will need to expand the space of functions $F$ we consider. To do so, we introduce the more general class of marginalized linear functions: we say that $F$ is marginalized linear if there exists a nonnegative linear function $M$ on an extended domain $2^{[n+l]}$ such that its marginalization to $2^{[n]}$ is exactly $F$:
$$F(A) = \min_{B:\,A\subseteq B\subseteq[n+l]} M(B), \qquad \forall\, A\subseteq[n]. \tag{6}$$
Essentially, such a function $F$ is "lifted" to a larger domain where it becomes linear. The key question is whether the polar $\Omega^*_F$ can be efficiently evaluated for such functions.
To develop an efficient procedure for computing the polar $\Omega^*_F$, first consider the simpler case of computing the polar $\Omega^*_M$ for a nonnegative linear function $M$. Note that by linearity the function $M$ can be expressed as $M(B) = \langle b, 1_B\rangle$ for $B \in \mathrm{dom}\,M \subseteq 2^{[n+l]}$ ($b \in \mathbb{R}^{n+l}_+$). Since the effective domain of $M$ need not be the whole space in general, we make use of the specialized polytope:
$$P := \mathrm{conv}\{1_B : B \in \mathrm{dom}\,M\} \subseteq [0, 1]^{n+l}. \tag{7}$$
Note $P$ may have exponentially many faces. From the definition (4) one can then re-express the polar $\Omega^*_M$ as:
$$\Omega^*_M(g) = \max_{\emptyset\ne B\in\mathrm{dom}\,M}\ \|g_B\|_p / M(B)^{1/p} = \max_{0\ne w\in P}\ \Big(\frac{\langle\tilde g, w\rangle}{\langle b, w\rangle}\Big)^{1/p}, \quad\text{where}\ \tilde g_i = |g_i|^p\ \forall i, \tag{8}$$
where we have used the fact that the linear-fractional objective must attain its maximum at vertices of $P$; that is, at $1_B$ for some $B \in \mathrm{dom}\,M$. Although the linear-fractional program (8) can be reduced to a sequence of LPs using the classical method of [22], a single LP suffices for our purposes. Indeed, let us first remove the constraint $w \ne 0$ by considering the alternative polytope:
$$Q := P \cap \{w\in\mathbb{R}^{n+l} : \langle \mathbf 1, w\rangle \ge 1\}. \tag{9}$$
As shown in Appendix A, all vertices of $Q$ are scalar multiples of the nonzero vertices of $P$. Since the objective in (8) is scale invariant, we can restrict the constraints to $w \in Q$. Then, by applying the transformations $\tilde w = w/\langle b, w\rangle$, $\eta = 1/\langle b, w\rangle$, problem (8) can be equivalently re-expressed as:
$$\max_{\tilde w,\,\eta>0}\ \langle\tilde g, \tilde w\rangle, \qquad\text{subject to}\ \tilde w \in \eta Q,\ \langle b, \tilde w\rangle = 1. \tag{10}$$
Of course, whether this LP can be solved efficiently depends on the structure of $Q$ (and of $P$ indeed). Finally, we note that the same formulation allows the polar to be efficiently computed for a marginalized linear function $F$ via a simple reduction: consider any $g \in \mathbb{R}^n$ and let $[g; 0] \in \mathbb{R}^{n+l}$ denote $g$ padded by $l$ zeros. Then $\Omega^*_F(g) = \Omega^*_M([g; 0])$ for all $g \in \mathbb{R}^n$ because
$$\max_{\emptyset\ne A\subseteq[n]} \frac{\|g_A\|_p^p}{F(A)} = \max_{\emptyset\ne A\subseteq[n]} \frac{\|g_A\|_p^p}{\min_{B:A\subseteq B\subseteq[n+l]} M(B)} = \max_{\emptyset\ne A\subseteq B} \frac{\|g_A\|_p^p}{M(B)} = \max_{B:\,\emptyset\ne B\subseteq[n+l]} \frac{\|[g;0]_B\|_p^p}{M(B)}. \tag{11}$$
To see the last equality, fixing $B$ the optimal $A$ is attained at $A = B \cap [n]$. If $B \cap [n]$ is empty, then $\|[g;0]_B\| = 0$ and the corresponding $B$ cannot be the maximizer of the last term, unless $\Omega^*_F(g) = 0$, in which case it is easy to see $\Omega^*_M([g;0]) = 0$.
Although we have kept our development general so far, the idea is clear: once an appropriate "lifting"
has been found so that the polytope Q in (9) can be compactly represented, the polar (5) can be
reformulated as the LP (10), for which efficient implementations can be sought. We now demonstrate
this new methodology for the two important structured regularizers: group sparsity and path coding.
3.1 Group Sparsity
For a general formulation of group sparsity, let $\mathcal{G} \subseteq 2^{[n]}$ be a set of variable groups (subsets) that possibly overlap [3, 6, 7]. Here we use $i \in [n]$ to index variables and $G \in \mathcal{G}$ to index groups. Consider the cost function over variable groups $F_g : 2^{[n]} \to \bar{\mathbb{R}}_+$ defined by:
$$F_g(A) = \sum_{G\in\mathcal{G}} c_G\, I(A\cap G \ne \emptyset), \tag{12}$$
where $c_G$ is a nonnegative cost and $I$ is an indicator such that $I(\cdot) = 1$ if its argument is true, and 0 otherwise. The value $F_g(A)$ provides a weighted count of how many groups overlap with $A$. Unfortunately, $F_g$ is not linear, so we need to re-express it to recover an efficient polar operator. To do so, augment the domain by adding $l = |\mathcal{G}|$ variables such that each new variable $G$ corresponds to a group $G$. Then define a weight vector $b \in \mathbb{R}^{n+l}_+$ such that $b_i = 0$ for $i \le n$ and $b_G = c_G$ for $n < G \le n+l$. Finally, consider the linear cost function $M_g : 2^{[n+l]} \to \bar{\mathbb{R}}_+$ defined by:
$$M_g(B) = \langle b, 1_B\rangle\ \text{if}\ i\in B \Rightarrow G\in B,\ \forall\, i\in G\in\mathcal{G}; \qquad M_g(B) = \infty\ \text{otherwise}. \tag{13}$$
The constraint ensures that if a variable $i \le n$ appears in the set $B$, then every variable $G$ corresponding to a group $G$ that contains $i$ must also appear in $B$. By construction, $M_g$ is a nonnegative linear function. It is also easy to verify that $F_g$ satisfies (6) with respect to $M_g$.
To compute the corresponding polar, observe that the effective domain of $M_g$ is a lattice, hence (4) can be solved by combinatorial methods. However, we can do better by exploiting problem structure in the LP. For example, observe that the polytope (7) can now be compactly represented as:
$$P_g = \{w\in\mathbb{R}^{n+l} : 0 \le w \le 1,\ w_i \le w_G,\ \forall\, i\in G\in\mathcal{G}\}. \tag{14}$$
Indeed, it is easy to verify that the integral vectors in $P_g$ are precisely $\{1_B : B \in \mathrm{dom}\,M_g\}$. Moreover, the linear constraint in (14) is totally unimodular (TUM) since it is the incidence matrix of a bipartite graph (variables and groups), hence $P_g$ is the convex hull of its integral vectors [23]. Using the fact that the scalar $\eta$ in (10) admits a closed-form solution $\eta = \langle \mathbf 1, \tilde w\rangle$ in this case, the LP (10) can be reduced to:
$$\max_{\tilde w \ge 0}\ \sum_{i\in[n]} \tilde g_i \min_{G:\, i\in G\in\mathcal{G}} \tilde w_G, \qquad \text{subject to}\ \sum_{G\in\mathcal{G}} b_G\,\tilde w_G = 1. \tag{15}$$
Note only $\{\tilde w_G\}$ appear in the problem, as implicitly $\tilde w_i = \min_{G:i\in G} \tilde w_G$, $\forall\, i\in[n]$. This is now just a piecewise linear objective over a (reweighted) simplex. Since projecting to a simplex can be performed in linear time, the smoothing method of [17] can be used to obtain a very efficient implementation. We illustrate a particular case where each variable $i\in[n]$ belongs to at most $r > 1$ groups. (Appendix D considers when the groups form a directed acyclic graph.)
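For reference, a standard sort-based projection onto the simplex (our own sketch; this variant is $O(n\log n)$, while the linear-time routine mentioned above replaces the sort with a selection step). The reweighted constraint $\sum_G b_G \tilde w_G = 1$ can be handled by the change of variables $u_G = b_G \tilde w_G$:

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = z}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - z
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```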
Proposition 1 Let $h(\tilde w)$ denote the negated objective of (15). Then for any $\epsilon > 0$,
$$h_\epsilon(\tilde w) := \frac{\epsilon}{n\log r}\sum_{i\in[n]} \log \sum_{G:\, i\in G} r^{-n\tilde g_i \tilde w_G/\epsilon}$$
satisfies: (i) the gradient of $h_\epsilon$ is $\frac{n\|\tilde g\|_\infty^2\log r}{\epsilon}$-Lipschitz, (ii) $h(\tilde w) - h_\epsilon(\tilde w) \in (-\epsilon, 0]$ for all $\tilde w$, and (iii) the gradient of $h_\epsilon$ can be computed in $O(nr)$ time.

(The proof is given in Appendix C.) With this construction, APG can be run on $h_\epsilon$ to achieve a $2\epsilon$-accurate solution to (15) within $O(\frac{1}{\epsilon}\sqrt{n\log r})$ steps [17], using a total time cost of $O(\frac{nr}{\epsilon}\sqrt{n\log r})$.
Note that this is significantly cheaper than the O(n2 (l + n)r) worst case complexity of [11, Algorithm 2]. More importantly, we gain explicit control of the trade-off between accuracy and
computational cost. A detailed comparison to related approaches is given in Appendix B.1 and E.
3.2 Path Coding
Another interesting regularizer, recently investigated by [12], is determined by path costs in a directed acyclic graph (DAG) defined over the set of variables $i \in [n]$. For convenience, we add two nodes, a source $s$ and a sink $t$, with dummy edges $(s, i)$ and $(i, t)$ for all $i \in [n]$. An $(s,t)$-path (or simply path) is then given by a sequence $(s, i_1), (i_1, i_2), \ldots, (i_{k-1}, i_k), (i_k, t)$ with $k \ge 1$. A nonnegative cost is associated with each edge, including $(s, i)$ and $(i, t)$, so the cost of a path is the sum of its edge costs. A regularizer can then be defined by (2) applied to the cost function $F_p : 2^{[n]} \to \bar{\mathbb{R}}_+$:
$$F_p(A) = \begin{cases} \text{cost of the path} & \text{if the nodes in } A \text{ form an } (s,t)\text{-path (unique for a DAG)} \\ \infty & \text{if such a path does not exist.} \end{cases} \tag{16}$$
Note $F_p$ is not submodular. Although $F_p$ is not linear, a similar "lifting" construction can be used to show that it is marginalized linear; hence it supports efficient computation of the polar. To explain the construction, let $V := [n] \cup \{s, t\}$ be the node set including $s$ and $t$, $E$ be the edge set including $(s,i)$ and $(i,t)$, $T = V \cup E$, and let $b \in \mathbb{R}^{|T|}_+$ be the concatenation of zeros for node costs and the given edge costs. Let $m := |E|$ be the number of edges. It is then easy to verify that $F_p$ satisfies (6) with respect to the linear cost function $M_p : 2^T \to \bar{\mathbb{R}}_+$ defined by:
$$M_p(B) = \langle b, 1_B\rangle\ \text{if}\ B\ \text{represents a path};\qquad \infty\ \text{otherwise}. \tag{17}$$
To efficiently compute the resulting polar, we consider the form (8) using $\tilde g_i = |g_i|^p\ \forall i$ as before:
$$\Omega^*_{M_p}(g) = \max_{0\ne w\in[0,1]^{|T|}}\ \Big(\frac{\langle\tilde g, w\rangle}{\langle b, w\rangle}\Big)^{1/p}, \quad\text{s.t.}\ w_i = \sum_{j:(i,j)\in E} w_{ij} = \sum_{k:(k,i)\in E} w_{ki},\ \forall i\in[n]. \tag{18}$$
Here the constraints form the well-known flow polytope whose vertices are exactly all the paths in a
DAG. Similar to (15), the normalized LP (10) can be simplified by solving for the scalar $\eta$ to obtain:
$$\max_{\tilde w\ge 0}\ \sum_{i\in[n]}\tilde g_i\Big(\sum_{j:(i,j)\in E}\tilde w_{ij} + \sum_{k:(k,i)\in E}\tilde w_{ki}\Big), \quad\text{s.t.}\ \langle b, \tilde w\rangle = 1,\ \sum_{j:(i,j)\in E}\tilde w_{ij} = \sum_{k:(k,i)\in E}\tilde w_{ki}\ \ \forall i\in[n]. \tag{19}$$
Due to the extra constraints, the LP (19) is more complicated than (15) obtained for group sparsity. Nevertheless, after some reformulation (essentially dualization), (19) can still be converted to a simple piecewise linear objective, hence it is amenable to smoothing; see Appendix F for details. To find a $2\epsilon$-accurate solution, the cutting plane method takes $O(\frac{mn}{\epsilon^2})$ computations to optimize the nonsmooth piecewise linear objective, while APG needs $O(\frac{1}{\epsilon}\sqrt{n})$ steps to optimize the smoothed objective, using a total time cost of $O(\frac{m}{\epsilon}\sqrt{n})$. This too is faster than the $O(\frac{nm}{\epsilon})$ worst-case complexity of [12, Appendix D.5] in the regime where $n$ is large and the desired accuracy $\epsilon$ is moderate.
4 Generalizing Beyond Atomic Norms
Although we find the above approach to be effective, many useful regularizers are not expressed in the form of an atomic norm (2), which makes evaluation of the polar a challenge and thus creates difficulty in applying Algorithm 1. For example, another important class of structured sparse regularizers is given by an alternative, composite gauge construction:
$$\Omega_s(w) = \sum_i \Omega_i(w), \quad\text{where each } \Omega_i \text{ is a closed gauge that can differ across } i. \tag{20}$$
The polar for such a regularizer is given by $\Omega^*_s(g) = \inf\{\max_i \Omega^*_i(w^i) : \sum_i w^i = g\}$, where each $w^i$ is an independent vector and $\Omega^*_i$ corresponds to the polar of $\Omega_i$ (proof given in Appendix H).
Unfortunately, a polar in this form does not appear to be easy to compute. However, for some regularizers in the form (20) the following proximal objective can indeed be computed efficiently:
$$\mathrm{Prox}_\kappa(g) = \min_\theta\ \tfrac{1}{2}\|g - \theta\|_2^2 + \kappa(\theta), \qquad \mathrm{ArgProx}_\kappa(g) = \operatorname*{arg\,min}_\theta\ \tfrac{1}{2}\|g - \theta\|_2^2 + \kappa(\theta). \tag{21}$$
The key observation is that computing the polar $\kappa^*$ can be efficiently reduced to just computing $\mathrm{Prox}_\kappa$.

Proposition 2 For any closed gauge $\kappa$, its polar $\kappa^*$ can be equivalently expressed by:
$$\kappa^*(g) = \inf\big\{\lambda \ge 0 : \mathrm{Prox}_{\lambda\kappa}(g) = \tfrac{1}{2}\|g\|_2^2\big\}. \tag{22}$$
(The proof is included in Appendix I.) Since the value of $\mathrm{Prox}_{\lambda\kappa}(g)$ is monotone in $\lambda$, one can efficiently compute the polar $\kappa^*$ by a simple root-finding search in $\lambda$. Thus, regularizers of the form (20) can still be accommodated in an efficient GCG method in the form of Algorithm 1.
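A sketch of the resulting search (our own illustration; `prox(g, lam)` is assumed to return the optimal value $\mathrm{Prox}_{\lambda\kappa}(g)$, which saturates at $\frac{1}{2}\|g\|_2^2$ exactly once $\lambda \ge \kappa^*(g)$):

```python
import numpy as np

def polar_via_prox(prox, g, tol=1e-8):
    """Compute kappa*(g) via Proposition 2 by bisection on lam."""
    target = 0.5 * np.dot(g, g)
    lo, hi = 0.0, 1.0
    while prox(g, hi) < target - tol:   # grow until the prox value saturates
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if prox(g, mid) >= target - tol:
            hi = mid
        else:
            lo = mid
    return hi
```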
4.1 Latent Fused Lasso
To demonstrate the usefulness of this reduction we consider the recently proposed latent fused lasso model [18], where for given data $X \in \mathbb{R}^{m\times n}$ one seeks a dictionary matrix $W \in \mathbb{R}^{m\times t}$ and coefficient matrix $U \in \mathbb{R}^{t\times n}$ that allow $X$ to be accurately reconstructed from a dictionary that has the desired structure. In particular, for a reconstruction loss $f$, the problem is specified by:
$$\min_{W,\,U\in\mathcal{U}}\ f(WU, X) + \Omega_p(W), \quad\text{where}\ \Omega_p(W) = \sum_i \lambda_1\|W_{:i}\|_p + \lambda_2\|W_{:i}\|_{\mathrm{TV}}, \tag{23}$$
such that $\|\cdot\|_{\mathrm{TV}}$ is given by $\|w\|_{\mathrm{TV}} = \sum_{j=1}^{m-1}|w_{j+1} - w_j|$ and $\|\cdot\|_p$ is the usual $\ell_p$-norm. The fused lasso [24] corresponds to $p = 1$. Note that $U$ is constrained to be in a compact set $\mathcal{U}$ to avoid degeneracy. To ease notation, we assume w.l.o.g. $\lambda_1 = \lambda_2 = 1$.
The main motivation for this regularizer arises from biostatistics, where one wishes to identify DNA copy number variations simultaneously for a group of related samples [18]. In this case the total variation norm $\|\cdot\|_{\mathrm{TV}}$ encourages the dictionary to vary smoothly from entry to entry while the $\ell_p$ norm shrinks the dictionary so that few latent features are selected. Conveniently, $\Omega_p$ decomposes along the columns of $W$, so one can apply the reduction in Proposition 2 to compute its polar, assuming $\mathrm{Prox}_{\Omega_p}$ can be efficiently computed. Solving $\mathrm{Prox}_{\Omega_p}$ appears nontrivial due to the composition of two overlapping norms; however, [25] showed that for $p = 1$ it can be solved efficiently by computing the Prox for each of the two norms successively. Here we extend this result by proving in Appendix J that the same fact holds for any $\ell_p$ norm.
Proposition 3 For any $1 \le p \le \infty$, $\mathrm{ArgProx}_{\|\cdot\|_{\mathrm{TV}} + \|\cdot\|_p}(w) = \mathrm{ArgProx}_{\|\cdot\|_p}\big(\mathrm{ArgProx}_{\|\cdot\|_{\mathrm{TV}}}(w)\big)$.
Since $\mathrm{Prox}_{\|\cdot\|_p}$ is easy to compute, the only remaining problem is to develop an efficient algorithm for computing $\mathrm{Prox}_{\|\cdot\|_{\mathrm{TV}}}$. Although [26] has recently proposed an approximate iterative method, we provide an algorithm in Appendix K that is able to efficiently compute the exact solution. Therefore, by combining this result with Propositions 2 and 3 we are able to efficiently compute the polar $\Omega^*_p$ and hence apply Algorithm 1 to solving (23) with respect to $W$.
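A sketch of the composition in Proposition 3 for the $p = 2$ case (our own illustration; `prox_tv` is a placeholder for a 1-D total-variation prox routine, e.g. the exact algorithm of Appendix K):

```python
import numpy as np

def prox_l2(w, lam):
    """Prox of lam * ||.||_2: block soft-thresholding."""
    nrm = np.linalg.norm(w)
    return np.zeros_like(w) if nrm <= lam else (1.0 - lam / nrm) * w

def prox_fused(w, lam1, lam2, prox_tv):
    """Prox of lam1 * ||.||_2 + lam2 * ||.||_TV by composing the two proxes."""
    return prox_l2(prox_tv(w, lam2), lam1)
```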
5 Experiments
To investigate the effectiveness of these computational schemes we considered three applications:
group lasso, path coding, and latent fused lasso. All algorithms were implemented in Matlab unless
otherwise noted.
5.1 Group Lasso: CUR-like Matrix Factorization
Our first experiment considered an example of group lasso that is inspired by CUR matrix factorization [27]. Given a data matrix $X \in \mathbb{R}^{n\times d}$, the goal is to compute an approximate factorization $X \approx CUR$, such that $C$ contains a subset of $c$ columns from $X$ and $R$ contains a subset of $r$ rows from $X$. Mairal et al. [11, §5.3] proposed a convex relaxation of this problem:
$$\min_W\ \tfrac{1}{2}\|X - XWX\|^2 + \lambda\Big(\sum_i \|W_{i:}\|_\infty + \sum_j \|W_{:j}\|_\infty\Big). \tag{24}$$
Conveniently, the regularizer fits the development of Section 3.1, with p = 1 and the groups defined
to be the rows and columns of W . To evaluate different methods, we used four gene-expression data
sets [28]: SRBCT, Brain Tumor 2, 9 Tumor, and Leukemia2, of sizes 83 × 2308, 50 × 10367, 60 × 5762, and 72 × 11225, respectively. The data matrices were first centered columnwise and then
rescaled to have unit Frobenius norm.
Algorithms. We compared three algorithms: GCG (Algorithm 1) with our polar operator which we
call GCG TUM, GCG with the polar operator of [11, Algorithm 2] (GCG Secant), and APG (see
Section 2.1). The PU in APG uses the routine mexProximalGraph from the SPAMS package [29].
The polar operator of GCG Secant was implemented with a mex wrapper of a max-flow package
[30], while GCG TUM used L-BFGS to find an optimal solution $\{\tilde w_G^*\}$ for the smoothed version of
} for the smoothed version of
6
0.2
0.15
0.1
1
2
3
4
10
10
10
CPU time (seconds)
0.07
0.06
0.05
0.04
0.03
Objective function value
Objective function value
0.07
0.06
0.05
0.04
3
10
10
10
CPU time (seconds)
(c) 9 Tumor
3
4
10
4
0.1
0.09
0.08
0.07
0.06
0.05
1
2
1.3
1.2
1.1
1
?1
10
10
0
1
10
10
CPU time (seconds)
(a) Obj vs CPU time (? = 10?2 )
(b) Brain Tumor 2
0.08
2
2
10
10
10
CPU time (seconds)
10
(a) SRBCT
1
1
Objective function value
0.05
0.08
Objective function value
Objective function value
Objective function value
0.25
3
10
10
10
CPU time (seconds)
(d) Leukemia2
Figure 1: Convex CUR matrix factorization results.
[Figure 2: objective value vs. CPU time for (a) $\lambda = 10^{-2}$ and (b) $\lambda = 10^{-3}$.]
Figure 2: Path coding results.
(15) given in Proposition 1, with smoothing parameter set to 10?3 . To recover an integral solution
it suffices to find an optimal solution to (15) that has the form wG = c for some groups and wG = 0
?
} and set the wG of the smallest k
for the remainder (such a solution must exist). So we sorted {wG
groups to 0, and wG for the remaining groups set to a common value that satisfies the constraint. The
best k can be recovered from {0, 1, . . . , |G| ? 1} in O(nr) time. See more details in Appendix G.
Both GCG methods relinquish local optimization (step 5) in Algorithm 1, but use a totally corrective variant of step 4, which allows efficient optimization by L-BFGS-B via pre-computing $X P_{F_g(g_k)} X$.
Results. For simplicity, we tested three values for $\lambda$: $10^{-3}$, $10^{-4}$, and $10^{-5}$, which led to increasingly dense solutions. Due to space limitations we only show in Figure 1 the results for $\lambda = 10^{-4}$, which gives moderately sparse solutions. On these data sets, GCG TUM proves to be an order of magnitude faster than GCG Secant in computing the polar. As [11] observes, network flow based algorithms often find solutions in practice far more quickly than their theoretical bounds suggest. Thanks to the efficiency of the totally corrective update, almost all computation taken by GCG Secant was devoted to the polar operator. Therefore the acceleration proffered by GCG TUM in computing the polar leads to a reduction of overall optimization time by at least 50%. Finally, APG is slower than even GCG Secant by an order of magnitude, with the PU taking up most of the computation.
5.2 Path Coding
Following [12, §4.3], we consider a logistic regression problem where one is given training examples $x_i \in \mathbb{R}^n$ with corresponding labels $y_i \in \{-1, 1\}$. For this problem, we formulate (1) with a path coding regularizer and the empirical risk
$$f(w) = \sum_i \tfrac{1}{n_i} \log(1 + \exp(-y_i \langle w, x_i \rangle)), \qquad (25)$$
where $n_i$ is the number of examples that share the same label as $y_i$. We used the breast cancer data set for this experiment, which consists of 8141 genes and 295 tumors [31]. The gene network is adopted from [32]. Similar to [12, §4.3], we removed all isolated genes (nodes) to which no edge is incident, randomly oriented the raw edges, and removed cycles to form a DAG using the function mexRemoveCyclesGraph in SPAMS. This resulted in 34864 edges and $n = 7910$ nodes.
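For concreteness, the empirical risk (25) can be written as below; the class-balanced weights $1/n_i$ are the only nonstandard ingredient. The data here is random stand-in data of matching dimensions, not the actual breast cancer set.

```python
import numpy as np

def class_balanced_logistic_loss(w, X, y):
    # Empirical risk (25): each example is weighted by 1/n_i, where n_i
    # is the number of examples sharing its label, so both classes
    # contribute equally regardless of class imbalance.
    n_pos = np.sum(y == 1)
    n_neg = np.sum(y == -1)
    weights = np.where(y == 1, 1.0 / n_pos, 1.0 / n_neg)
    margins = -y * (X @ w)
    return np.sum(weights * np.log1p(np.exp(margins)))

rng = np.random.default_rng(0)
X = rng.standard_normal((295, 7910))   # 295 tumors, 7910 nodes (genes)
y = np.where(rng.random(295) < 0.25, 1, -1)
w = np.zeros(7910)
print(class_balanced_logistic_loss(w, X, y))  # equals 2 * log(2) at w = 0
```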
Algorithms. We again considered three methods: APG, GCG with our polar operator (GCG TUM), and GCG with the polar operator from [12, Algorithm 1], which we label as GCG Secant. The PU in APG uses the routine mexProximalPathCoding from SPAMS, which solves a quadratic network flow problem. It turns out that the time cost of a single call to the PU was enough for GCG TUM and GCG Secant to converge to a final solution, and so the APG result is not included in our plots. We implemented the polar operator for GCG Secant based on Matlab's built-in shortest path routine graphshortestpath (C++ wrapped by mex). For GCG TUM, we used cutting plane to solve a variant of the dual of (19) (see Appendix F), which is much simpler than smoothing in implementation, but exhibits similar efficiency in practice. An integral solution can also be naturally recovered in the course of computing the objective. Again, both GCG methods only used totally corrective updates.
Results. Figure 2 shows the result for path coding, with the regularization coefficient $\lambda$ set to $10^{-2}$ and $10^{-3}$ so that the solution is moderately sparse. Again it is clear that GCG TUM is an order of magnitude faster than GCG Secant.
5.3 Latent Fused Lasso
Finally, we compared GCG and APG on the latent fused lasso problem (23). Two algorithms were tested as the PU in APG: our proposed method and the algorithm in [26], which we label as APG-Liu. The synthetic data is generated by following [18]. For each basis (column) of the dictionary, we use the model $\widetilde{W}_{ij} = \sum_{s=1}^{S_j} c_s \, I(i_s \le i \le i_s + l_s)$, where $S_j \in \{3, 5, 8, 10\}$ specifies the number of consecutive blocks in the $j$-th basis, and $c_s \in \{\pm 1, \pm 2, \pm 3, \pm 4, \pm 5\}$, $i_s \in \{1, \ldots, m - 10\}$, and $l_s \in \{5, 10, 15, 20\}$ are the magnitude, starting position, and length of the $s$-th block, respectively. Note that we choose $c_s$, $i_s$, $l_s$ randomly (and independently for each block $s$) from their respective sets. The entries of the coefficient matrix $\widetilde{U}$ are sampled from the Gaussian distribution $\mathcal{N}(0, 1)$ (independently for each entry) and each row is normalized to have unit $\ell_2$ norm. Finally, we generate the observation matrix $X = \widetilde{W}\widetilde{U} + \xi$, with added (zero mean and unit variance) Gaussian noise $\xi$. We set the dimension $m = 300$, the number of samples $n = 200$, and the number of bases (latent dimension) $\tilde{t} = 10$.
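A sketch of this generator follows, under the stated distributions; where blocks overlap, their magnitudes simply accumulate, matching the sum in the model above.

```python
import numpy as np

def make_latent_fused_lasso_data(m=300, n=200, t=10, seed=0):
    # Each basis (column of W) is a sum of S_j constant blocks with
    # random magnitude, start, and length; U has unit-l2-norm rows.
    rng = np.random.default_rng(seed)
    W = np.zeros((m, t))
    for j in range(t):
        S_j = rng.choice([3, 5, 8, 10])
        for _ in range(S_j):
            c = rng.choice([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5])
            start = rng.integers(0, m - 10)
            length = rng.choice([5, 10, 15, 20])
            W[start:start + length, j] += c   # piecewise-constant block
    U = rng.standard_normal((t, n))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # unit l2 rows
    X = W @ U + rng.standard_normal((m, n))        # additive N(0, 1) noise
    return X, W, U

X, W_true, U_true = make_latent_fused_lasso_data()
print(X.shape)  # (300, 200)
```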
Since the noise is Gaussian, we choose the squared loss $f(WU, X) = \tfrac{1}{2}\|X - WU\|_F^2$, but the algorithm is applicable to any other smooth loss as well. To avoid degeneracy, we constrained each row of $U$ to have unit $\ell_2$ norm. Finally, to pick an appropriate dictionary size, we tried $t \in \{5, 10, 20\}$, which corresponds to under-, perfect-, and over-estimation, respectively. The regularization constants $\lambda_1, \lambda_2$ of the regularizer were chosen from $\{0.01, 0.1, 1, 10, 100\}$.
Note that problem (23) is not jointly convex in $W$ and $U$, so we followed the same strategy as [18]; that is, we alternately optimized $W$ and $U$, keeping the other fixed. For each subproblem, we ran both APG and GCG to compare their performance. Due to space limitations, we only report the running time for the setting $\lambda_1 = \lambda_2 = 0.1$, $t = 20$, and $p \in \{1, 2\}$. In these experiments we observed that the polar typically requires only 5 to 6 calls to Prox. As can be seen from Figure 3, GCG is significantly faster than APG and APG-Liu in reducing the objective. This is due to the greedy nature of GCG, which yields very sparse iterates and, when interleaved with local search, achieves fast convergence. A skeleton of this alternating scheme appears after the figure placeholder below.

[Figure 3: Latent fused lasso. Loss + regularizer (scale $\times 10^4$) against CPU time (sec) for APG, GCG, and APG-Liu with $p = 1$ and $p = 2$.]
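The alternating scheme just described can be skeletonized as below. The convex subproblem solvers are injected as callbacks (the paper uses APG or GCG there); the closed-form ridge-like fits shown in the usage example are stand-ins for demonstration only.

```python
import numpy as np

def alternating_minimization(X, t, fit_W, fit_U, n_rounds=10, seed=0):
    # Problem (23) is not jointly convex in (W, U), so we alternate:
    # each call to fit_W / fit_U solves a convex subproblem with the
    # other factor held fixed.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((t, n))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    W = np.zeros((m, t))
    for _ in range(n_rounds):
        W = fit_W(X, U)   # minimize over W with U fixed
        U = fit_U(X, W)   # minimize over U with W fixed (unit-row-norm)
    return W, U

# Stand-in subproblem solvers: regularized least squares in closed form.
def fit_W(X, U, eps=1e-6):
    return X @ U.T @ np.linalg.inv(U @ U.T + eps * np.eye(U.shape[0]))

def fit_U(X, W, eps=1e-6):
    U = np.linalg.inv(W.T @ W + eps * np.eye(W.shape[1])) @ W.T @ X
    return U / np.linalg.norm(U, axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 200))
W, U = alternating_minimization(X, t=20, fit_W=fit_W, fit_U=fit_U)
print(np.linalg.norm(X - W @ U) / np.linalg.norm(X))
```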
6 Conclusion
We have identified and investigated a new class of structured sparse regularizers whose polar can
be reformulated as a linear program with totally unimodular constraints. By leveraging smoothing
techniques, we are able to compute the corresponding polars with significantly better efficiency than
previous approaches. When plugged into the GCG algorithm, one can observe significant reductions
in run time for both group lasso and path coding regularization. We have further developed a generic
scheme for converting an efficient proximal solver to an efficient method for computing the polar
operator. This reduction allowed us to develop a fast new method for latent fused lasso. For future
work, we plan to study more general subset cost functions and investigate new structured regularizers
amenable to our approach. It will also be interesting to extend GCG to handle nonsmooth losses.
References
[1] P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, 2011.
[2] Y. Eldar and G. Kutyniok, editors. Compressed Sensing: Theory and Applications. Cambridge, 2012.
[3] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. JMLR, 12:3371-3412, 2011.
[4] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, 2010.
[5] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. JMLR, 12:2297-2334, 2011.
[6] G. Obozinski and F. Bach. Convex relaxation for combinatorial penalties. Technical Report HAL 00694765, 2012.
[7] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468-3497, 2009.
[8] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2012.
[9] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[10] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140:125-161, 2013.
[11] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Convex and network flow optimization for structured sparsity. JMLR, 12:2681-2720, 2011.
[12] J. Mairal and B. Yu. Supervised feature selection in graphs with path coding penalties and network flows. JMLR, 14:2449-2485, 2013.
[13] M. Dudik, Z. Harchaoui, and J. Malick. Lifted coordinate descent for learning with trace-norm regularizations. In AISTATS, 2012.
[14] X. Zhang, Y. Yu, and D. Schuurmans. Accelerated training for matrix-norm regularization: A boosting approach. In NIPS, 2012.
[15] S. Laue. A hybrid algorithm for convex semidefinite optimization. In ICML, 2012.
[16] B. Mishra, G. Meyer, F. Bach, and R. Sepulchre. Low-rank optimization with trace norm penalty. Technical report, 2011. http://arxiv.org/abs/1112.2318.
[17] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
[18] G. Nowak, T. Hastie, J. R. Pollack, and R. Tibshirani. A fused lasso latent feature model for analyzing multi-sample aCGH data. Biostatistics, 12(4):776-791, 2011.
[19] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[20] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805-849, 2012.
[21] F. Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical Report HAL 00527714, 2010.
[22] W. Dinkelbach. On nonlinear fractional programming. Management Science, 13(7), 1967.
[23] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, 1st edition, 1986.
[24] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67:91-108, 2005.
[25] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302-332, 2007.
[26] J. Liu, L. Yuan, and J. Ye. An efficient algorithm for a class of fused lasso problems. In Conference on Knowledge Discovery and Data Mining, 2010.
[27] M. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697-702, 2009.
[28] URL http://www.gems-system.or.
[29] URL http://spams-devel.gforge.inria.fr.
[30] URL http://drwn.anu.edu.au/index.html.
[31] M. Van De Vijver et al. A gene-expression signature as a predictor of survival in breast cancer. The New England Journal of Medicine, 347(25):1999-2009, 2002.
[32] H. Chuang, E. Lee, Y. Liu, D. Lee, and T. Ideker. Network-based classification of breast cancer metastasis. Molecular Systems Biology, 3(140), 2007.
On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization
Ke Hou, Zirui Zhou, Anthony Man-Cho So
Department of Systems Engineering & Engineering Management
The Chinese University of Hong Kong
Shatin, N. T., Hong Kong
{khou,zrzhou,manchoso}@se.cuhk.edu.hk
Zhi-Quan Luo
Department of Electrical & Computer Engineering
University of Minnesota
Minneapolis, MN 55455, USA
[email protected]
Abstract
Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such a problem is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm-regularized problem, which may be of independent interest.
1 Introduction
The problem of finding a low-rank matrix that (approximately) satisfies a given set of conditions has recently generated a lot of interest in many communities. Indeed, such a problem arises in a wide variety of applications, including approximation algorithms [17], automatic control [5], matrix classification [20], matrix completion [6], multi-label classification [1], multi-task learning [2], network localization [7], subspace learning [24], and trace regression [9], just to name a few. Due to the combinatorial nature of the rank function, the task of recovering a matrix with the desired rank and properties is generally intractable. To circumvent this, a popular approach is to use the trace norm (also known as the nuclear norm; recall that the trace norm of a matrix is defined as the sum of its singular values) as a surrogate for the rank function. Such an approach is quite natural, as the trace norm is the tightest convex lower bound of the rank function over the set of matrices with spectral norm at most one [13]. In the context of machine learning, the trace norm is typically used as a regularizer in the minimization of a certain convex loss function. This gives rise to convex optimization problems of the form
$$\min_{X \in \mathbb{R}^{m \times n}} \; \left\{ F(X) = f(X) + \tau\|X\|_* \right\}, \qquad (1)$$
where $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ is the convex loss function, $\|X\|_*$ denotes the trace norm of $X$, and $\tau > 0$ is a regularization parameter. By standard results in convex optimization [4], the above formulation is tractable (i.e., polynomial-time solvable) for many choices of the loss function $f$. In practice,
however, one is often interested in settings where the decision variable X is of high dimension.
Thus, there has been much research effort in developing fast algorithms for solving (1) lately.
Currently, a popular method for solving (1) is the proximal gradient method (PGM), which exploits the composite nature of the objective function $F$ and certain smoothness properties of the loss function $f$ [8, 19, 11]. The attractiveness of PGM lies not only in its excellent numerical performance, but also in its strong theoretical convergence rate guarantees. Indeed, for the trace norm-regularized problem (1) with $f$ being convex and continuously differentiable and $\nabla f$ being Lipschitz continuous, the standard PGM will achieve an additive error of $O(1/k)$ in the optimal value after $k$ iterations. Moreover, this error can be reduced to $O(1/k^2)$ using acceleration techniques; see, e.g., [19]. The sublinear $O(1/k^2)$ convergence rate is known to be optimal if $f$ is simply given by a first-order oracle [12]. On the other hand, if $f$ is strongly convex, then the convergence rate can be improved to $O(c^k)$ for some $c \in (0, 1)$ (i.e., a linear convergence rate) [16]. However, in machine learning, the loss functions of interest are often highly structured and hence not just given by an oracle, but they are not necessarily strongly convex either. For instance, in matrix completion, a commonly used loss function is the square loss $f(\cdot) = \|\mathcal{A}(\cdot) - b\|_2^2/2$, where $\mathcal{A} : \mathbb{R}^{m \times n} \to \mathbb{R}^p$ is a linear measurement operator and $b \in \mathbb{R}^p$ is a given set of observations. Clearly, $f$ is not strongly convex when $\mathcal{A}$ has a non-trivial nullspace (or equivalently, when $\mathcal{A}$ is not injective). In view of this, it is natural to ask whether linear convergence of the PGM can be established for a larger class of loss functions.
In this paper, we take a first step towards answering this question. Specifically, we show that when the loss function $f$ takes the form $f(X) = h(\mathcal{A}(X))$, where $\mathcal{A} : \mathbb{R}^{m \times n} \to \mathbb{R}^p$ is an arbitrary linear operator and $h : \mathbb{R}^p \to \mathbb{R}$ is strictly convex with certain smoothness and curvature properties, the PGM for solving (1) has an asymptotic linear rate of convergence. Note that $f$ need not be strictly convex even if $h$ is, as $\mathcal{A}$ is arbitrary. Our result covers a wide range of loss functions used in the literature, such as square loss and logistic loss. Moreover, to the best of our knowledge, it is the first linear convergence result concerning the application of a first-order method to the trace norm-regularized problem (1) that does not require the strong convexity of $f$.
The key to our convergence analysis is a new Lipschitzian error bound for problem (1). Roughly, it says that the distance between a point $X \in \mathbb{R}^{m \times n}$ and the optimal solution set of (1) is on the order of the residual norm $\|\mathrm{prox}_\tau(X - \nabla f(X)) - X\|_F$, where $\mathrm{prox}_\tau$ is the proximity operator associated with the regularization term $\tau\|X\|_*$. Once we have such a bound, a routine application of the powerful analysis framework developed by Luo and Tseng [10] will yield the desired linear convergence result. Prior to this work, Lipschitzian error bounds for composite function minimization are available for cases where the non-smooth part either has a polyhedral epigraph (such as the $\ell_1$-norm) [23] or is the (sparse) group LASSO regularization [22, 25]. However, the question of whether a similar bound holds for trace norm regularization has remained open, despite its apparent similarity to $\ell_1$-norm regularization. Indeed, unlike the $\ell_1$-norm, the trace norm has a non-polyhedral epigraph; see, e.g., [18]. Moreover, the existing approach for establishing error bounds for $\ell_1$-norm or (sparse) group LASSO regularization is based on splitting the decision variables into groups, where variables from different groups do not interfere with one another, so that each group can be analyzed separately. However, the trace norm of a matrix is determined by its singular values, and each of them depends on every single entry of the matrix. Thus, we cannot use the same splitting approach to analyze the entries of the matrix. To overcome the above difficulties, we make the crucial observation that if $\bar{X}$ is an optimal solution to (1), then both $\bar{X}$ and $-\nabla f(\bar{X})$ have the same set of left and right singular vectors; see Proposition 4.2. As a result, we can use matrix perturbation theory to get hold of the spectral structure of the points that are close to the optimal solution set. This in turn allows us to establish a Lipschitzian error bound for the trace norm-regularized problem (1), thereby resolving the aforementioned open question in the affirmative.
2 Preliminaries
2.1 Basic Setup
We consider the trace norm-regularized optimization problem (1), in which the loss function $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ takes the form
$$f(X) = h(\mathcal{A}(X)), \qquad (2)$$
where $\mathcal{A} : \mathbb{R}^{m \times n} \to \mathbb{R}^p$ is a linear operator and $h : \mathbb{R}^p \to \mathbb{R}$ is a function satisfying the following assumptions:

Assumption 2.1
(a) The effective domain of $h$, denoted by $\mathrm{dom}(h)$, is open and non-empty.
(b) The function $h$ is continuously differentiable with Lipschitz-continuous gradient on $\mathrm{dom}(h)$ and is strongly convex on any convex compact subset of $\mathrm{dom}(h)$.

Note that Assumption 2.1(b) implies the strict convexity of $h$ on $\mathrm{dom}(h)$ and the Lipschitz continuity of $\nabla f$. Now, let $\mathcal{X}$ denote the set of optimal solutions to problem (1). We make the following assumption concerning $\mathcal{X}$:

Assumption 2.2 The optimal solution set $\mathcal{X}$ is non-empty.
The above assumptions can be justified in various applications. For instance, in matrix completion, the square loss $f(\cdot) = \|\mathcal{A}(\cdot) - b\|_2^2/2$ induced by the linear measurement operator $\mathcal{A}$ and the set of observations $b \in \mathbb{R}^p$ is of the form (2), with $h(\cdot) = \|(\cdot) - b\|_2^2/2$. Moreover, it is clear that such an $h$ satisfies Assumptions 2.1 and 2.2. In multi-task learning, the loss function takes the form $f(\cdot) = \sum_{t=1}^{T} \ell(\mathcal{A}_t(\cdot), y_t)$, where $T$ is the number of learning tasks, $\mathcal{A}_t : \mathbb{R}^{m \times n} \to \mathbb{R}^p$ is the linear operator defined by the input data for the $t$-th task, $y_t \in \mathbb{R}^p$ is the output data for the $t$-th task, and $\ell : \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}$ measures the learning error. Note that $f$ can be put into the form (2), where $\mathcal{A} : \mathbb{R}^{m \times n} \to \mathbb{R}^{Tp}$ is given by $\mathcal{A}(X) = (\mathcal{A}_1(X), \mathcal{A}_2(X), \ldots, \mathcal{A}_T(X))$, and $h : \mathbb{R}^{Tp} \to \mathbb{R}$ is given by $h(z) = \sum_{t=1}^{T} \ell(z_t, y_t)$ with $z_t \in \mathbb{R}^p$ for $t = 1, \ldots, T$ and $z = (z_1, \ldots, z_T)$. Moreover, in the case where $\ell$ is, say, the square loss (i.e., $\ell(z_t, y_t) = \|z_t - y_t\|_2^2/2$) or the logistic loss (i.e., $\ell(z_t, y_t) = \sum_{i=1}^{p} \log(1 + \exp(-z_{ti} y_{ti}))$), it can be verified that Assumptions 2.1 and 2.2 hold.
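To make the composition $f = h \circ \mathcal{A}$ concrete, here is a minimal sketch of the matrix completion instance, where $\mathcal{A}$ samples observed entries and its adjoint scatters the gradient of $h$ back into matrix form. The sampling pattern and data are synthetic stand-ins.

```python
import numpy as np

def make_completion_loss(mask, b):
    # f(X) = h(A(X)) with A the entry-sampling operator and h the square
    # loss; grad f(X) = A*(grad h(A(X))), where A* scatters back.
    rows, cols = mask

    def f(X):
        z = X[rows, cols]                  # A(X)
        return 0.5 * np.sum((z - b) ** 2)  # h(z) = ||z - b||^2 / 2

    def grad_f(X):
        G = np.zeros_like(X)
        G[rows, cols] = X[rows, cols] - b  # A*(grad h(z))
        return G

    return f, grad_f

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
idx = rng.random((50, 50)) < 0.3           # ~30% of entries observed
mask = np.nonzero(idx)
b = M[mask]
f, grad_f = make_completion_loss(mask, b)
X0 = np.zeros((50, 50))
print(f(X0), np.linalg.norm(grad_f(X0)))
```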
2.2 Some Facts about the Optimal Solution Set
Since $f(\cdot) = h(\mathcal{A}(\cdot))$ by (2) and $h(\cdot)$ is strictly convex on $\mathrm{dom}(h)$ by Assumption 2.1(b), it is easy to verify that the map $X \mapsto \mathcal{A}(X)$ is invariant over the optimal solution set $\mathcal{X}$. In other words, there exists a $\bar{z} \in \mathrm{dom}(h)$ such that for any $X^* \in \mathcal{X}$, we have $\mathcal{A}(X^*) = \bar{z}$. Thus, we can express $\mathcal{X}$ as
$$\mathcal{X} = \left\{ X \in \mathbb{R}^{m \times n} : \tau\|X\|_* = v^* - h(\bar{z}), \; \mathcal{A}(X) = \bar{z} \right\},$$
where $v^* > -\infty$ is the optimal value of (1). In particular, $\mathcal{X}$ is a non-empty convex compact set. This implies that every $X \in \mathbb{R}^{m \times n}$ has a unique projection $\bar{X} \in \mathcal{X}$ onto $\mathcal{X}$, which is given by the solution to the following optimization problem:
$$\mathrm{dist}(X, \mathcal{X}) = \min_{Y \in \mathcal{X}} \|X - Y\|_F.$$
In addition, since $\mathcal{X}$ is bounded and $F$ is convex, it follows from [14, Corollary 8.7.1] that the level set $\{X \in \mathbb{R}^{m \times n} : F(X) \le \zeta\}$ is bounded for any $\zeta \in \mathbb{R}$.
2.3 Proximal Gradient Method and the Residual Map
To motivate the PGM for solving (1), we recall an alternative characterization of the optimal solution set $\mathcal{X}$. Consider the proximity operator $\mathrm{prox}_\tau : \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$, which is defined as
$$\mathrm{prox}_\tau(X) = \arg\min_{Z \in \mathbb{R}^{m \times n}} \; \tau\|Z\|_* + \frac{1}{2}\|X - Z\|_F^2. \qquad (3)$$
By comparing the optimality conditions for (1) and (3), it is immediate that a solution $X^* \in \mathbb{R}^{m \times n}$ is optimal for (1) if and only if it satisfies the following fixed-point equation:
$$X^* = \mathrm{prox}_\tau(X^* - \nabla f(X^*)). \qquad (4)$$
This naturally leads to the following PGM for solving (1):
$$Y^{k+1} = X^k - \alpha_k \nabla f(X^k), \qquad X^{k+1} = \mathrm{prox}_{\tau\alpha_k}(Y^{k+1}), \qquad (5)$$
where $\alpha_k > 0$ is the step size in the $k$-th iteration, for $k = 0, 1, \ldots$; see, e.g., [8, 19, 11]. As is well-known, the proximity operator defined above can be expressed in terms of the so-called matrix shrinkage operator. To describe this result, we introduce some definitions. Let $\nu > 0$ be given. The non-negative vector shrinkage operator $s_\nu : \mathbb{R}_+^p \to \mathbb{R}_+^p$ is defined as $(s_\nu(z))_i = \max\{0, z_i - \nu\}$, where $i = 1, \ldots, p$. The matrix shrinkage operator $S_\nu : \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$ is defined as $S_\nu(X) = U\Sigma_\nu V^\top$, where $X = U\Sigma V^\top$ is the singular value decomposition of $X$ with $\Sigma = \mathrm{Diag}(\sigma(X))$ and $\sigma(X)$ being the vector of singular values of $X$, and $\Sigma_\nu = \mathrm{Diag}(s_\nu(\sigma(X)))$. Then, it can be shown that
$$\mathrm{prox}_\tau(X) = S_\tau(X); \qquad (6)$$
see, e.g., [11, Theorem 3].
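A direct transcription of $S_\nu$ via the SVD is a few lines (a minimal sketch; large-scale implementations would use partial SVDs instead of the full decomposition):

```python
import numpy as np

def matrix_shrinkage(X, nu):
    # S_nu: soft-threshold the singular values of X by nu, keeping the
    # singular vectors unchanged.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - nu, 0.0)    # s_nu(sigma(X))
    return U @ np.diag(s_shrunk) @ Vt

# By (6), prox_tau(X) = S_tau(X): shrink by the regularization parameter.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
print(np.linalg.svd(matrix_shrinkage(X, 1.0), compute_uv=False))
```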
Our goal in this paper is to study the convergence rate of the PGM (5). Towards that end, we need a measure to quantify its progress towards optimality. One natural candidate would be $\mathrm{dist}(\cdot, \mathcal{X})$, the distance to the optimal solution set $\mathcal{X}$. Despite its intuitive appeal, such a measure is hard to compute or analyze. In view of (4) and (6), a reasonable alternative would be the norm of the residual map $R : \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$, which is defined as
$$R(X) = S_\tau(X - \nabla f(X)) - X. \qquad (7)$$
Intuitively, the residual map measures how much a solution $X \in \mathbb{R}^{m \times n}$ violates the optimality condition (4). In particular, $X$ is an optimal solution to (1) if and only if $R(X) = 0$. However, since $\|R(\cdot)\|_F$ is only a surrogate of $\mathrm{dist}(\cdot, \mathcal{X})$, we need to establish a relationship between them. This motivates the development of a so-called error bound for problem (1).
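Putting (5)-(7) together gives the following sketch of the PGM with $\|R(X^k)\|_F$ as a computable progress measure; the quadratic toy loss is our own choice (picked so that $L_f = 1$), not from the paper.

```python
import numpy as np

def shrink(X, nu):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - nu, 0.0)) @ Vt

def residual(X, grad_f, tau):
    # R(X) = S_tau(X - grad f(X)) - X; vanishes exactly at optima of (1).
    return shrink(X - grad_f(X), tau) - X

def pgm(X0, grad_f, tau, alpha, tol=1e-8, max_iter=5000):
    # PGM iteration (5) with constant step size alpha < 1/L_f, using
    # ||R(X^k)||_F as a surrogate for the distance to the optimal set.
    X = X0
    for k in range(max_iter):
        X = shrink(X - alpha * grad_f(X), tau * alpha)
        if np.linalg.norm(residual(X, grad_f, tau)) < tol:
            break
    return X, k

# Toy instance: f(X) = ||X - B||_F^2 / 2, so grad f(X) = X - B, L_f = 1.
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 20))
X_hat, iters = pgm(np.zeros_like(B), lambda X: X - B, tau=1.0, alpha=0.9)
print(iters, np.linalg.norm(residual(X_hat, lambda X: X - B, 1.0)))
```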
3 Main Results
Key to our convergence analysis of the PGM (5) is the following error bound for problem (1), which constitutes the main contribution of this paper:

Theorem 3.1 (Error Bound for Trace Norm Regularization) Suppose that in problem (1), $f$ is of the form (2), and Assumptions 2.1 and 2.2 are satisfied. Then, for any $\zeta \ge v^*$, there exist constants $\kappa > 0$ and $\epsilon > 0$ such that
$$\mathrm{dist}(X, \mathcal{X}) \le \kappa\|R(X)\|_F \quad \text{whenever } F(X) \le \zeta, \; \|R(X)\|_F \le \epsilon. \qquad (8)$$
Armed with Theorem 3.1 and some standard properties of the PGM (5), we can apply the convergence analysis framework developed by Luo and Tseng [10] to establish the linear convergence of (5). Recall that a sequence of vectors $\{w^k\}_{k \ge 0}$ is said to converge Q-linearly (resp. R-linearly) to a vector $w^*$ if there exist an index $K \ge 0$ and a constant $\delta \in (0, 1)$ such that $\|w^{k+1} - w^*\|_2 / \|w^k - w^*\|_2 \le \delta$ for all $k \ge K$ (resp. if there exist constants $c > 0$ and $\delta \in (0, 1)$ such that $\|w^k - w^*\|_2 \le c \cdot \delta^k$ for all $k \ge 0$).

Theorem 3.2 (Linear Convergence of the Proximal Gradient Method) Suppose that in problem (1), $f$ is of the form (2), and Assumptions 2.1 and 2.2 are satisfied. Moreover, suppose that the step size $\alpha_k$ in the PGM (5) satisfies $0 < \underline{\alpha} < \alpha_k < \bar{\alpha} < 1/L_f$ for $k = 0, 1, 2, \ldots$, where $L_f$ is the Lipschitz constant of $\nabla f$, and $\underline{\alpha}, \bar{\alpha}$ are given constants. Then, the sequence of solutions $\{X^k\}_{k \ge 0}$ generated by the PGM (5) converges R-linearly to an element in the optimal solution set $\mathcal{X}$, and the associated sequence of objective values $\{F(X^k)\}_{k \ge 0}$ converges Q-linearly to the optimal value $v^*$.
Proof. Under the given setting, it can be shown that there exist scalars $\eta_1, \eta_2, \eta_3 > 0$, which depend on $\underline{\alpha}$, $\bar{\alpha}$, and $L_f$, such that
$$F(X^k) - F(X^{k+1}) \ge \eta_1\|X^k - X^{k+1}\|_F^2, \qquad (9)$$
$$F(X^{k+1}) - v^* \le \eta_2\left[(\mathrm{dist}(X^k, \mathcal{X}))^2 + \|X^{k+1} - X^k\|_F^2\right], \qquad (10)$$
$$\|R(X^k)\|_F \le \eta_3\|X^k - X^{k+1}\|_F; \qquad (11)$$
see the supplementary material. Since $\{F(X^k)\}_{k \ge 0}$ is a monotonically decreasing sequence by (9) and $F(X^k) \ge v^*$ for all $k \ge 0$, we conclude, again by (9), that $X^k - X^{k+1} \to 0$. This, together with (11), implies that $R(X^k) \to 0$. Thus, by (9), (10) and Theorem 3.1, there exist an index $K \ge 0$ and a constant $\eta_4 > 0$ such that for all $k \ge K$,
$$F(X^{k+1}) - v^* \le \eta_4\|X^k - X^{k+1}\|_F^2 \le \frac{\eta_4}{\eta_1}\left(F(X^k) - F(X^{k+1})\right).$$
It follows that
$$F(X^{k+1}) - v^* \le \frac{\eta_4}{\eta_1 + \eta_4}\left(F(X^k) - v^*\right), \qquad (12)$$
which establishes the Q-linear convergence of $\{F(X^k)\}_{k \ge 0}$ to $v^*$. Using (9) and (12), we can show that $\{\|X^{k+1} - X^k\|_F^2\}_{k \ge 0}$ converges R-linearly to 0, which, together with (11), implies that $\{X^k\}_{k \ge 0}$ converges R-linearly to a point in $\mathcal{X}$; see the supplementary material.
4 Proof of the Error Bound
The structure of our proof of Theorem 3.1 largely follows that laid out in [22, Section 6]. However, as explained in Section 1, some new ingredients are needed in order to analyze the spectral properties of a point that is close to the optimal solution set. Before we proceed, let us set up the notation that will be used in the proof. Let $L > 0$ denote the Lipschitz constant of $\nabla h$ and $\partial\|\cdot\|_*$ denote the subdifferential of $\|\cdot\|_*$. Given a sequence $\{X^k\}_{k \ge 0} \subseteq \mathbb{R}^{m \times n} \setminus \mathcal{X}$, define
$$R^k = R(X^k), \quad \bar{X}^k = \arg\min_{Y \in \mathcal{X}} \|X^k - Y\|_F, \quad \Delta_k = \|X^k - \bar{X}^k\|_F,$$
$$z^k = \mathcal{A}(X^k), \quad G^k = \nabla f(X^k) = \mathcal{A}^*(\nabla h(z^k)), \quad \bar{G} = \mathcal{A}^*(\nabla h(\bar{z})), \qquad (13)$$
where $\mathcal{A}^*$ is the adjoint operator of $\mathcal{A}$. The crux of the proof of Theorem 3.1 is the following lemma:
Lemma 4.1 Under the setting of Theorem 3.1, suppose that there exists a convergent sequence $\{X^k\}_{k \ge 0} \subseteq \mathbb{R}^{m \times n} \setminus \mathcal{X}$ satisfying
$$F(X^k) \le \zeta \;\text{ for all } k \ge 0, \qquad R^k \to 0, \qquad \frac{R^k}{\Delta_k} \to 0. \qquad (14)$$
Then, the following hold:

(a) (Asymptotic Optimality) The limit point $\hat{X}$ of $\{X^k\}_{k \ge 0}$ belongs to $\mathcal{X}$.

(b) (Bounded Iterates) There exists a convex compact subset $Z$ of $\mathrm{dom}(h)$ such that $z^k, \bar{z} \in Z$ for all $k \ge 0$. Consequently, there exists a constant $\mu \in (0, L]$ such that for all $k \ge 0$,
$$(\nabla h(z^k) - \nabla h(\bar{z}))^\top (z^k - \bar{z}) \ge \mu\|z^k - \bar{z}\|_2^2. \qquad (15)$$

(c) (Restricted Invertibility) There exists a constant $\theta > 0$ such that
$$\|X^k - \bar{X}^k\|_F \le \theta\|z^k - \bar{z}\|_2 = \theta\|\mathcal{A}(X^k - \bar{X}^k)\|_2 \quad \text{for all } k \ge 0. \qquad (16)$$
It is clear that $\|\mathcal{A}(X^k - \bar{X}^k)\|_2 \le \|\mathcal{A}\| \cdot \|X^k - \bar{X}^k\|_F$, where $\|\mathcal{A}\| = \sup_{\|Y\|_F = 1} \|\mathcal{A}(Y)\|_2$ is the spectral norm of $\mathcal{A}$. Thus, the key element in Lemma 4.1 is the restricted invertibility property (16). For the sake of continuity, let us proceed to prove Theorem 3.1 by assuming the validity of Lemma 4.1.
Proof. [Theorem 3.1] We argue by contradiction. Suppose that there exists $\zeta \ge v^*$ such that (8) fails to hold for all $\kappa > 0$ and $\epsilon > 0$. Then, there exists a sequence $\{X^k\}_{k \ge 0} \subseteq \mathbb{R}^{m \times n} \setminus \mathcal{X}$ satisfying (14). Since $\{X \in \mathbb{R}^{m \times n} : F(X) \le \zeta\}$ is bounded (see Section 2.2), by passing to a subsequence if necessary, we may assume that $\{X^k\}_{k \ge 0}$ converges to some $\hat{X}$. Hence, the premises of Lemma 4.1 are satisfied. Now, by Fermat's rule [15, Theorem 10.1], for each $k \ge 0$,
$$R^k \in \arg\min_D \; \langle G^k + R^k, D \rangle + \tau\|X^k + D\|_*. \qquad (17)$$
Hence, we have
$$\langle G^k + R^k, R^k \rangle + \tau\|X^k + R^k\|_* \le \langle G^k + R^k, \bar{X}^k - X^k \rangle + \tau\|\bar{X}^k\|_*.$$
Since $\bar{X}^k \in \mathcal{X}$ and $\nabla f(\bar{X}^k) = \bar{G}$, we also have $-\bar{G} \in \tau\,\partial\|\bar{X}^k\|_*$, which implies that
$$\tau\|\bar{X}^k\|_* \le \langle \bar{G}, X^k + R^k - \bar{X}^k \rangle + \tau\|X^k + R^k\|_*.$$
Adding the two inequalities above and simplifying yield
$$\langle G^k - \bar{G}, X^k - \bar{X}^k \rangle + \|R^k\|_F^2 \le \langle \bar{G} - G^k, R^k \rangle + \langle R^k, \bar{X}^k - X^k \rangle. \qquad (18)$$
Since $z^k = \mathcal{A}(X^k)$ and $\bar{z} = \mathcal{A}(\bar{X}^k)$, by Lemma 4.1(b,c),
$$\langle G^k - \bar{G}, X^k - \bar{X}^k \rangle = (\nabla h(z^k) - \nabla h(\bar{z}))^\top (z^k - \bar{z}) \ge \mu\|z^k - \bar{z}\|_2^2 \ge \frac{\mu}{\theta^2}\|X^k - \bar{X}^k\|_F^2. \qquad (19)$$
Hence, it follows from (15), (18), (19) and the Lipschitz continuity of $\nabla h$ that
$$\frac{\mu}{\theta^2}\|X^k - \bar{X}^k\|_F^2 + \|R^k\|_F^2 \le (\nabla h(\bar{z}) - \nabla h(z^k))^\top \mathcal{A}(R^k) + \langle R^k, \bar{X}^k - X^k \rangle \le L\|\mathcal{A}\|^2\|X^k - \bar{X}^k\|_F\|R^k\|_F + \|X^k - \bar{X}^k\|_F\|R^k\|_F.$$
In particular, this implies that
$$\frac{\mu}{\theta^2}\|X^k - \bar{X}^k\|_F^2 \le (L\|\mathcal{A}\|^2 + 1)\|X^k - \bar{X}^k\|_F\|R^k\|_F$$
for all $k \ge 0$, which, upon dividing both sides by $\|X^k - \bar{X}^k\|_F$, yields a contradiction to (14).
4.1 Proof of Lemma 4.1
We now return to the proof of Lemma 4.1. Since $R^k \to 0$ by (14) and $R$ is continuous, we have $R(\hat{X}) = 0$, which implies that $\hat{X} \in \mathcal{X}$. This establishes (a). To prove (b), observe that due to (a), the sequence $\{X^k\}_{k \ge 0}$ is bounded. Hence, the sequence $\{\mathcal{A}(X^k)\}_{k \ge 0}$ is also bounded, which implies that the points $z^k = \mathcal{A}(X^k)$ and $\bar{z} = \mathcal{A}(\hat{X})$ lie in a convex compact subset $Z$ of $\mathrm{dom}(h)$ for all $k \ge 0$. The inequality (15) then follows from Assumption 2.1(b). Note that we have $\mu \le L$, as $\nabla h$ is Lipschitz continuous with parameter $L$.

To prove (c), we argue by contradiction. Suppose that (16) is false. Then, by further passing to a subsequence if necessary, we may assume that
$$\|\mathcal{A}(X^k) - \bar{z}\|_2 \,/\, \|X^k - \bar{X}^k\|_F \to 0. \qquad (20)$$
In the sequel, we will also assume without loss of generality that $m \le n$. The following proposition establishes a property of the optimal solution set $\mathcal{X}$ that will play a crucial role in our proof.
Proposition 4.2 Consider a fixed $\bar{X} \in \mathcal{X}$. Let $\bar{X} - \bar{G} = \bar{U}[\mathrm{Diag}(\bar{\sigma}) \;\; 0]\bar{V}^\top$ be the singular value decomposition of $\bar{X} - \bar{G}$, where $\bar{U} \in \mathbb{R}^{m \times m}$, $\bar{V} \in \mathbb{R}^{n \times n}$ are orthogonal matrices and $\bar{\sigma}$ is the vector of singular values of $\bar{X} - \bar{G}$. Then, the matrices $\bar{X}$ and $-\bar{G}$ can be simultaneously singular-value-decomposed by $\bar{U}$ and $\bar{V}$. Moreover, the set $\mathcal{X}_c \subseteq \mathcal{X}$, which is defined as
$$\mathcal{X}_c = \left\{ X \in \mathcal{X} : X = \bar{U}[\mathrm{Diag}(\sigma(X)) \;\; 0]\bar{V}^\top \right\},$$
is a non-empty convex compact set.
By Proposition 4.2 (applied with $\bar{X} = \hat{X}$, the limit of $\{X^k\}_{k \ge 0}$), for every $k \ge 0$, the point $X^k$ has a unique projection $\tilde{X}^k \in \mathcal{X}_c$ onto $\mathcal{X}_c$. Let
$$\delta_k = \|X^k - \tilde{X}^k\|_F = \min_{Y \in \mathcal{X}_c} \|X^k - Y\|_F. \qquad (21)$$
Since $\mathcal{X}_c \subseteq \mathcal{X}$, we have $\Delta_k = \|X^k - \bar{X}^k\|_F \le \|X^k - \tilde{X}^k\|_F = \delta_k$. It follows from (20) that $\|\mathcal{A}(X^k) - \bar{z}\|_2 / \|X^k - \tilde{X}^k\|_F \to 0$. This is equivalent to $\mathcal{A}(Q^k) \to 0$, where
$$Q^k = \frac{X^k - \tilde{X}^k}{\delta_k} \quad \text{for all } k \ge 0. \qquad (22)$$
In particular, we have $\|Q^k\|_F = 1$ for all $k \ge 0$. By further passing to a subsequence if necessary, we will assume that $\{Q^k\}_{k \ge 0}$ converges to some $\bar{Q}$. Clearly, we have $\mathcal{A}(\bar{Q}) = 0$ and $\|\bar{Q}\|_F = 1$.
4.1.1 Decomposing $\bar{Q}$

Our goal now is to show that for $k$ sufficiently large and $\epsilon > 0$ sufficiently small, the point $\check{X} = \tilde{X}^k + \epsilon\bar{Q}$ belongs to $\mathcal{X}_c$ and is closer to $X^k$ than $\tilde{X}^k$ is to $X^k$. This would then contradict the fact that $\tilde{X}^k$ is the projection of $X^k$ onto $\mathcal{X}_c$. To begin, let $\sigma^k$ be the vector of singular values of $X^k - G^k$. Since $X^k - G^k \to \hat{X} - \bar{G}$, the sequence $\{\sigma^k\}_{k \ge 0}$ is bounded. Hence, for $i = 1, \ldots, m$, by passing to a subsequence if necessary, we can classify the sequence $\{\sigma_i^k\}_{k \ge 0}$ into one of the following three cases: (A) $\sigma_i^k \le \tau$ for all $k \ge 0$; (B) $\sigma_i^k > \tau$ and $\sigma_i(\tilde{X}^k) > 0$ for all $k \ge 0$; (C) $\sigma_i^k > \tau$ and $\sigma_i(\tilde{X}^k) = 0$ for all $k \ge 0$. The following proposition gives the key structural properties of $\bar{Q}$ that will lead to the desired contradiction:
Proposition 4.3 The matrix $\bar{Q}$ admits the decomposition $\bar{Q} = \bar{U}[\mathrm{Diag}(\omega) \;\; 0]\bar{V}^\top$, where
$$\omega_i \;\begin{cases} = -\lim_{k \to \infty} \sigma_i(\tilde{X}^k)/\delta_k \le 0 & \text{in Case (A)}, \\ \in \mathbb{R} & \text{in Case (B)}, \\ > 0 & \text{in Case (C)}, \end{cases} \qquad \text{for } i = 1, \ldots, m.$$
It should be noted that the decomposition given in Proposition 4.3 is not necessarily the singular value decomposition of $\bar{Q}$, as $\omega$ could have negative components. A proof of Proposition 4.3 can be found in the supplementary material.
4.1.2 Completing the Proof
Armed with Proposition 4.3, we are now ready to complete the proof of Lemma 4.1(c). Since $Q^k \ne 0$ for all $k \ge 0$, it follows from (22) that $\langle X^k - \tilde{X}^k, \bar{Q} \rangle > 0$ for all $k$ sufficiently large. Fix any such $k$ and let $\check{X} = \tilde{X}^k + \epsilon\bar{Q}$, where $\epsilon > 0$ is a parameter to be determined. Since $\mathcal{A}(\bar{Q}) = 0$, it follows from (13) that $\nabla f(\check{X}) = \nabla f(\tilde{X}^k) = \bar{G}$. Moreover, since $\tilde{X}^k \in \mathcal{X}_c$, by the optimality condition (4) and Proposition 4.2, we have
$$\max\left\{0, \; \sigma_i(\tilde{X}^k) + \sigma_i(-\bar{G}) - \tau\right\} = \sigma_i(\tilde{X}^k) \quad \text{for } i = 1, \ldots, m. \qquad (23)$$
Now, we claim that for $\epsilon > 0$ sufficiently small, $\check{X}$ satisfies
$$S_\tau(\check{X} - \bar{G})\bar{v}_i = \check{X}\bar{v}_i \;\; \text{for } i = 1, \ldots, n, \qquad \bar{u}_i^\top S_\tau(\check{X} - \bar{G}) = \bar{u}_i^\top\check{X} \;\; \text{for } i = 1, \ldots, m, \qquad (24)$$
where $\bar{u}_i$ (resp. $\bar{v}_i$) is the $i$-th column of $\bar{U}$ (resp. $\bar{V}$). This would then imply that $\check{X} \in \mathcal{X}_c$. To prove the claim, observe that for $i = m+1, \ldots, n$, both sides of (24) are equal to 0. Moreover, since $\tilde{X}^k \in \mathcal{X}_c$, Propositions 4.2 and 4.3 give
$$\check{X} - \bar{G} = \bar{U}\left[\mathrm{Diag}(\sigma(\tilde{X}^k) + \epsilon\omega + \sigma(-\bar{G})) \;\; 0\right]\bar{V}^\top.$$
Thus, it suffices to show that for $\epsilon > 0$ sufficiently small,
$$\sigma_i(\tilde{X}^k) + \epsilon\omega_i + \sigma_i(-\bar{G}) \ge 0 \quad \text{for } i = 1, \ldots, m, \qquad (25)$$
$$s_\tau(\sigma_i(\tilde{X}^k) + \epsilon\omega_i + \sigma_i(-\bar{G})) = \sigma_i(\tilde{X}^k) + \epsilon\omega_i \quad \text{for } i = 1, \ldots, m. \qquad (26)$$
Towards that end, fix an index $i = 1, \ldots, m$ and consider the three cases defined in Section 4.1.1:

Case (A). If $\sigma_i(\tilde{X}^k) = 0$ for all $k$ sufficiently large, then Proposition 4.3 gives $\omega_i = 0$. Moreover, we have $\sigma_i(-\bar{G}) \le \tau$ by (23). This implies that both (25) and (26) are satisfied for any choice of $\epsilon > 0$. On the other hand, if $\sigma_i(\tilde{X}^k) > 0$ for all $k$ sufficiently large, then Proposition 4.3 gives $\omega_i \le 0$. Moreover, we have $\sigma_i(-\bar{G}) = \tau$ by (23). By choosing $\epsilon > 0$ so that $\sigma_i(\tilde{X}^k) + \epsilon\omega_i \ge 0$, we can guarantee that both (25) and (26) are satisfied.

Case (B). Since $\sigma_i(\tilde{X}^k) > 0$ for all $k \ge 0$, we have $\sigma_i(-\bar{G}) = \tau$ by (23). Hence, both (25) and (26) can be satisfied by choosing $\epsilon > 0$ so that $\sigma_i(\tilde{X}^k) + \epsilon\omega_i \ge 0$.

Case (C). By Proposition 4.2, we have $\hat{X} \in \mathcal{X}_c$. Since $X^k \to \hat{X}$ and $\delta_k = \|X^k - \tilde{X}^k\|_F \le \|X^k - \hat{X}\|_F$, we have $\tilde{X}^k \to \hat{X}$ as well. It follows that $\sigma_i(\hat{X}) = 0$, as $\sigma_i(\tilde{X}^k) = 0$ for all $k \ge 0$ by assumption. Now, since $X^k - G^k \to \hat{X} - \bar{G}$ and $\sigma_i^k > \tau$, we have $\bar{\sigma}_i \ge \tau$. Thus, Proposition 4.2 implies that $\tau \le \bar{\sigma}_i = \sigma_i(\hat{X} - \bar{G}) = \sigma_i(\hat{X}) + \sigma_i(-\bar{G}) = \sigma_i(-\bar{G})$. This, together with (23), yields $\sigma_i(-\bar{G}) = \tau$. Since $\omega_i > 0$ by Proposition 4.3, we conclude that both (25) and (26) can be satisfied by any choice of $\epsilon > 0$.
Thus, in all three cases, the claim is established. In particular, we have $\check{X} \in \mathcal{X}_c$. This, together with $\langle X^k - \tilde{X}^k, \bar{Q} \rangle > 0$ and $\|\bar{Q}\|_F = 1$, yields
$$\|X^k - \check{X}\|_F^2 = \|X^k - \tilde{X}^k - \epsilon\bar{Q}\|_F^2 = \|X^k - \tilde{X}^k\|_F^2 - 2\epsilon\langle X^k - \tilde{X}^k, \bar{Q} \rangle + \epsilon^2 < \|X^k - \tilde{X}^k\|_F^2$$
for $\epsilon > 0$ sufficiently small, which contradicts the fact that $\tilde{X}^k$ is the projection of $X^k$ onto $\mathcal{X}_c$. This completes the proof of Lemma 4.1(c).
5 Numerical Experiments
In this section, we complement our theoretical results by testing the numerical performance of the
PGM (5) on two problems: matrix completion and matrix classification.
Matrix Completion: We randomly generate an $n \times n$ matrix $M$ with a prescribed rank $r$. Then, we fix a sampling ratio $\rho \in (0, 1]$ and sample $p = \lceil \rho n^2 \rceil$ entries of $M$ uniformly at random. This induces a sampling operator $\mathcal{P} : \mathbb{R}^{m \times n} \to \mathbb{R}^p$ and an observation vector $b \in \mathbb{R}^p$. In our experiments, we fix the rank $r = 3$ and use the square loss $f(\cdot) = \|\mathcal{P}(\cdot) - b\|_2^2/2$ with regularization parameter $\tau = 1$ in problem (1). We then solve the resulting problem for different values of $n$ and $\rho$ using the PGM (5) with a fixed step size $\alpha = 1$. We stop the algorithm when $F(X^k) - F(X^{k+1}) < 10^{-8}$. Figure 1 shows the semi-log plots of the error in objective value and the error in solution against the number of iterations. It can be seen that as long as the iterates are close enough to the optimal set, both the objective values and the solutions converge linearly.
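A small reproduction sketch of this experiment follows (synthetic data and parameters as described above; the full SVD per iteration limits it to moderate $n$, and the function and variable names are our own).

```python
import numpy as np

def shrink(X, nu):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - nu, 0.0)) @ Vt

def completion_experiment(n=100, r=3, rho=0.3, tau=1.0, alpha=1.0,
                          tol=1e-8, max_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank r
    mask = rng.random((n, n)) < rho          # roughly rho * n^2 samples
    rows, cols = np.nonzero(mask)
    b = M[rows, cols]
    X, F_prev = np.zeros((n, n)), np.inf
    history = []
    for _ in range(max_iter):
        G = np.zeros_like(X)
        G[rows, cols] = X[rows, cols] - b    # grad of ||P(X) - b||^2 / 2
        X = shrink(X - alpha * G, tau * alpha)
        F = 0.5 * np.sum((X[rows, cols] - b) ** 2) \
            + tau * np.linalg.svd(X, compute_uv=False).sum()
        history.append(F)
        if F_prev - F < tol:                 # stopping rule from the text
            break
        F_prev = F
    return history

hist = completion_experiment()
print(len(hist), hist[-1])
```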
[Figure 1: Matrix Completion. Semi-log plots of the convergence performance of the objective value and of the solution against iterations, for $n \in \{100, 500, 1000\}$ with $\rho = 0.3$, and for $\rho \in \{0.1, 0.3, 0.5\}$ with $n = 1000$.]

[Figure 2: Matrix Classification. Convergence performance of the objective value and of the solution against iterations, for $\rho \in \{0.2, 0.5, 0.8\}$ with $n = 40$.]
Matrix Classification: We consider a matrix classification problem under the setting described in [21]. Specifically, we first randomly generate a low-rank matrix classifier $X^\star$, which is an $n \times n$ symmetric matrix of rank $r$. Then, we specify a sampling ratio $\rho \in (0, 1]$ and sample $p = \lceil \rho n^2 \rceil / 2$ independent $n \times n$ symmetric matrices $W_1, \ldots, W_p$ from the standard Wishart distribution with $n$ degrees of freedom. The label of $W_i$, denoted by $y_i$, is given by $\mathrm{sgn}(\langle X^\star, W_i \rangle)$. In our experiments, we fix the rank $r = 3$, the dimension $n = 40$, and use the logistic loss $f(\cdot) = \sum_{i=1}^{p} \log(1 + \exp(-y_i \langle \cdot, W_i \rangle))$ with regularization parameter $\tau = 1$ in problem (1). Since a good lower bound on the Lipschitz constant $L_f$ of $\nabla f$ is not readily available in this case, a backtracking line search was adopted at each iteration to achieve an acceptable step size; see, e.g., [3]. We stop the algorithm when $F(X^k) - F(X^{k+1}) < 10^{-6}$. Figure 2 shows the convergence performance of the PGM (5) as $\rho$ varies. Again, it can be seen that both the objective values and the solutions converge linearly.
6 Conclusion
In this paper, we have established the linear convergence of the PGM for solving a class of trace norm-regularized problems. Our convergence result does not require the objective function to be strongly convex and is applicable to many settings in machine learning. The key technical tool in the proof is a Lipschitzian error bound for trace norm-regularized problems, which could be of independent interest. A future direction is to study error bounds for more general matrix norm-regularized problems and their implications on the convergence rates of first-order methods.
Acknowledgments The authors would like to thank the anonymous reviewers for their careful reading of the manuscript and insightful comments. The research of A. M.-C. So is supported in part by a gift grant from Microsoft Research Asia.
References
[1] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering Shared Structures in Multiclass Classification. In Proc. 24th ICML, pages 17-24, 2007.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex Multi-Task Feature Learning. Mach. Learn., 73(3):243-272, 2008.
[3] A. Beck and M. Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci., 2(1):183-202, 2009.
[4] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 2001.
[5] M. Fazel, H. Hindi, and S. P. Boyd. A Rank Minimization Heuristic with Application to Minimum Order System Approximation. In Proc. 2001 ACC, pages 4734-4739, 2001.
[6] D. Gross. Recovering Low-Rank Matrices from Few Coefficients in Any Basis. IEEE Trans. Inf. Theory, 57(3):1548-1566, 2011.
[7] S. Ji, K.-F. Sze, Z. Zhou, A. M.-C. So, and Y. Ye. Beyond Convex Relaxation: A Polynomial-Time Non-Convex Optimization Approach to Network Localization. In Proc. 32nd IEEE INFOCOM, pages 2499-2507, 2013.
[8] S. Ji and J. Ye. An Accelerated Gradient Method for Trace Norm Minimization. In Proc. 26th ICML, pages 457-464, 2009.
[9] V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear-Norm Penalization and Optimal Rates for Noisy Low-Rank Matrix Completion. Ann. Stat., 39(5):2302-2329, 2011.
[10] Z.-Q. Luo and P. Tseng. Error Bounds and Convergence Analysis of Feasible Descent Methods: A General Approach. Ann. Oper. Res., 46(1):157-178, 1993.
[11] S. Ma, D. Goldfarb, and L. Chen. Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization. Math. Program., 128(1-2):321-353, 2011.
[12] Yu. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Boston, 2004.
[13] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization. SIAM Rev., 52(3):471-501, 2010.
[14] R. T. Rockafellar. Convex Analysis. Princeton Landmarks in Mathematics and Physics. Princeton University Press, Princeton, New Jersey, 1997.
[15] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin Heidelberg, second edition, 2004.
[16] M. Schmidt, N. Le Roux, and F. Bach. Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization. In Proc. NIPS 2011, pages 1458-1466, 2011.
[17] A. M.-C. So, Y. Ye, and J. Zhang. A Unified Theorem on SDP Rank Reduction. Math. Oper. Res., 33(4):910-920, 2008.
[18] W. So. Facial Structures of Schatten p-Norms. Linear and Multilinear Algebra, 27(3):207-212, 1990.
[19] K.-C. Toh and S. Yun. An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Linear Least Squares Problems. Pac. J. Optim., 6(3):615-640, 2010.
[20] R. Tomioka and K. Aihara. Classifying Matrices with a Spectral Regularization. In Proc. 24th ICML, pages 895-902, 2007.
[21] R. Tomioka, T. Suzuki, M. Sugiyama, and H. Kashima. A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices. In Proc. 27th ICML, pages 1087-1094, 2010.
[22] P. Tseng. Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization. Math. Program., 125(2):263-295, 2010.
[23] P. Tseng and S. Yun. A Coordinate Gradient Descent Method for Nonsmooth Separable Minimization. Math. Program., 117(1-2):387-423, 2009.
[24] M. White, Y. Yu, X. Zhang, and D. Schuurmans. Convex Multi-View Subspace Learning. In Proc. NIPS 2012, pages 1682-1690, 2012.
[25] H. Zhang, J. Jiang, and Z.-Q. Luo. On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems. J. Oper. Res. Soc. China, 1(2):163-186, 2013.
4,350 | 4,937 | Accelerating Stochastic Gradient Descent using
Predictive Variance Reduction
Rie Johnson
RJ Research Consulting
Tarrytown NY, USA
Tong Zhang
Baidu Inc., Beijing, China
Rutgers University, New Jersey, USA
Abstract
Stochastic gradient descent is popular for large scale optimization but has slow
convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient
descent which we call stochastic variance reduced gradient (SVRG). For smooth
and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic
Average Gradient (SAG). However, our analysis is significantly simpler and more
intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as
some structured prediction problems and neural network learning.
1 Introduction
In machine learning, we often encounter the following optimization problem. Let $\psi_1, \ldots, \psi_n$ be
a sequence of vector functions from $\mathbb{R}^d$ to $\mathbb{R}$. Our goal is to find an approximate solution of the
following optimization problem
$$\min_w P(w), \qquad P(w) := \frac{1}{n} \sum_{i=1}^n \psi_i(w). \tag{1}$$
For example, given a sequence of n training examples $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i \in \mathbb{R}^d$ and
$y_i \in \mathbb{R}$, if we use the squared loss $\psi_i(w) = (w^\top x_i - y_i)^2$, then we can obtain least squares
regression. In this case, $\psi_i(\cdot)$ represents a loss function in machine learning. One may also include
regularization conditions. For example, if we take $\psi_i(w) = \ln(1 + \exp(-w^\top x_i y_i)) + 0.5 \lambda w^\top w$
($y_i \in \{\pm 1\}$), then the optimization problem becomes regularized logistic regression.
A standard method is gradient descent, which can be described by the following update rule for
$t = 1, 2, \ldots$:
$$w^{(t)} = w^{(t-1)} - \eta_t \nabla P(w^{(t-1)}) = w^{(t-1)} - \frac{\eta_t}{n} \sum_{i=1}^n \nabla \psi_i(w^{(t-1)}). \tag{2}$$
However, at each step, gradient descent requires evaluation of n derivatives, which is expensive. A
popular modification is stochastic gradient descent (SGD): at each iteration $t = 1, 2, \ldots$, we
draw $i_t$ randomly from $\{1, \ldots, n\}$, and
$$w^{(t)} = w^{(t-1)} - \eta_t \nabla \psi_{i_t}(w^{(t-1)}). \tag{3}$$
The expectation $\mathbb{E}[w^{(t)} \mid w^{(t-1)}]$ is identical to (2). A more general version of SGD is the following
$$w^{(t)} = w^{(t-1)} - \eta_t\, g_t(w^{(t-1)}, \xi_t), \tag{4}$$
where $\xi_t$ is a random variable that may depend on $w^{(t-1)}$, and the expectation (with respect to $\xi_t$)
satisfies $\mathbb{E}[g_t(w^{(t-1)}, \xi_t) \mid w^{(t-1)}] = \nabla P(w^{(t-1)})$. The advantage of stochastic gradient is that each step
only relies on a single derivative $\nabla \psi_i(\cdot)$, and thus the computational cost is 1/n that of the standard
gradient descent. However, a disadvantage of the method is that the randomness introduces variance;
this is caused by the fact that $g_t(w^{(t-1)}, \xi_t)$ equals the gradient $\nabla P(w^{(t-1)})$ in expectation, but
each $g_t(w^{(t-1)}, \xi_t)$ is different. In particular, if $g_t(w^{(t-1)}, \xi_t)$ is large, then we have a relatively
large variance which slows down the convergence. For example, consider the case that each $\psi_i(w)$
is smooth,
$$\psi_i(w) - \psi_i(w') - 0.5 L \|w - w'\|^2 \le \nabla \psi_i(w')^\top (w - w'), \tag{5}$$
and convex; and $P(w)$ is strongly convex,
$$P(w) - P(w') - 0.5 \gamma \|w - w'\|_2^2 \ge \nabla P(w')^\top (w - w'), \tag{6}$$
where $L \ge \gamma \ge 0$. As long as we pick $\eta_t$ as a constant $\eta < 1/L$, we have linear convergence of
$O((1 - \gamma/L)^t)$ (Nesterov [2004]). However, for SGD, due to the variance of random sampling, we
generally need to choose $\eta_t = O(1/t)$ and obtain a slower sub-linear convergence rate of $O(1/t)$.
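To make the trade-off above concrete, here is a toy Python sketch of one step of updates (2) and (3); the list-of-callables interface, the helper names, and the RNG handling are our own illustrative choices, not from the paper.
```python
import numpy as np

def gd_step(w, grads, eta):
    # Full-gradient step as in update (2): average all n component gradients.
    return w - eta * np.mean([g(w) for g in grads], axis=0)

def sgd_step(w, grads, eta_t, rng):
    # Stochastic step as in update (3): one randomly drawn component gradient.
    i = rng.integers(len(grads))
    return w - eta_t * grads[i](w)
```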
This means that we have a trade-off of fast computation per iteration and slow convergence for
SGD versus slow computation per iteration and fast convergence for gradient descent. Although
the fast computation means it can reach an approximate solution relatively quickly, and thus has
been proposed by various researchers for large scale problems Zhang [2004], Shalev-Shwartz et al.
[2007] (also see Leon Bottou's webpage http://leon.bottou.org/projects/sgd), the
convergence slows down when we need a more accurate solution.
In order to improve SGD, one has to design methods that can reduce the variance, which allows
us to use a larger learning rate $\eta_t$. Two recent papers Le Roux et al. [2012], Shalev-Shwartz and
Zhang [2012] proposed methods that achieve such a variance reduction effect for SGD, which leads
to a linear convergence rate when $\psi_i(w)$ is smooth and strongly convex. The method in Le Roux
et al. [2012] was referred to as SAG (stochastic average gradient), and the method in Shalev-Shwartz
and Zhang [2012] was referred to as SDCA. These methods are suitable for training convex linear
prediction problems such as logistic regression or least squares regression, and in fact, SDCA is the
method implemented in the popular lib-SVM package Hsieh et al. [2008]. However, both proposals require storage of all gradients (or dual variables). Although this issue may not be a problem
for training simple regularized linear prediction problems such as least squares regression, the requirement makes it unsuitable for more complex applications where storing all gradients would be
impractical. One example is training certain structured learning problems with convex loss, and another example is training nonconvex neural networks. In order to remedy the problem, we propose a
different method in this paper that employs explicit variance reduction without the need to store the
intermediate gradients. We show that if ?i (w) is strongly convex and smooth, then the same convergence rate as those of Le Roux et al. [2012], Shalev-Shwartz and Zhang [2012] can be obtained.
Even if ?i (w) is nonconvex (such as neural networks), under mild assumptions, it can be shown that
asymptotically the variance of SGD goes to zero, and thus faster convergence can be achieved. In
summary, this work makes the following three contributions:
- Our method does not require the storage of full gradients, and thus is suitable for some
problems where methods such as Le Roux et al. [2012], Shalev-Shwartz and Zhang [2012]
cannot be applied.
- We provide a much simpler proof of the linear convergence results for smooth and strongly
convex loss, and our view provides a significantly more intuitive explanation of the fast
convergence by explicitly connecting the idea to variance reduction in SGD. The resulting
insight can easily lead to additional algorithmic development.
- The relatively intuitive variance reduction explanation also applies to nonconvex optimization problems, and thus this idea can be used for complex problems such as training deep
neural networks.
2 Stochastic Variance Reduced Gradient
One practical issue for SGD is that in order to ensure convergence the learning rate $\eta_t$ has to decay
to zero. This leads to slower convergence. The need for a small learning rate is due to the variance
of SGD (that is, SGD approximates the full gradient using a small batch of samples or even a single
example, and this introduces variance). However, there is a fix described below. At each time, we
keep a version of estimated w as $\tilde{w}$ that is close to the optimal w. For example, we can keep a
snapshot of $\tilde{w}$ after every m SGD iterations. Moreover, we maintain the average gradient
$$\tilde{\mu} = \nabla P(\tilde{w}) = \frac{1}{n} \sum_{i=1}^n \nabla \psi_i(\tilde{w}),$$
and its computation requires one pass over the data using $\tilde{w}$. Note that the expectation of $\nabla \psi_i(\tilde{w}) - \tilde{\mu}$
over i is zero, and thus the following update rule is generalized SGD: randomly draw $i_t$ from
$\{1, \ldots, n\}$:
$$w^{(t)} = w^{(t-1)} - \eta_t \left( \nabla \psi_{i_t}(w^{(t-1)}) - \nabla \psi_{i_t}(\tilde{w}) + \tilde{\mu} \right). \tag{7}$$
We thus have
$$\mathbb{E}[w^{(t)} \mid w^{(t-1)}] = w^{(t-1)} - \eta_t \nabla P(w^{(t-1)}).$$
That is, if we let the random variable $\xi_t = i_t$ and $g_t(w^{(t-1)}, \xi_t) = \nabla \psi_{i_t}(w^{(t-1)}) - \nabla \psi_{i_t}(\tilde{w}) + \tilde{\mu}$,
then (7) is a special case of (4).
The update rule in (7) can also be obtained by defining the auxiliary function
$$\tilde{\psi}_i(w) = \psi_i(w) - (\nabla \psi_i(\tilde{w}) - \tilde{\mu})^\top w.$$
Since $\sum_{i=1}^n (\nabla \psi_i(\tilde{w}) - \tilde{\mu}) = 0$, we know that
$$P(w) = \frac{1}{n} \sum_{i=1}^n \psi_i(w) = \frac{1}{n} \sum_{i=1}^n \tilde{\psi}_i(w).$$
Now we may apply the standard SGD to the new representation $P(w) = \frac{1}{n} \sum_{i=1}^n \tilde{\psi}_i(w)$ and obtain
the update rule (7).
To see that the variance of the update rule (7) is reduced, we note that when both $\tilde{w}$ and $w^{(t)}$ converge
to the same parameter $w_*$, then $\tilde{\mu} \to 0$. Therefore if $\nabla \psi_i(\tilde{w}) \to \nabla \psi_i(w_*)$, then
$$\nabla \psi_i(w^{(t-1)}) - \nabla \psi_i(\tilde{w}) + \tilde{\mu} \to \nabla \psi_i(w^{(t-1)}) - \nabla \psi_i(w_*) \to 0.$$
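As a quick numeric illustration of this claim (our own aside, not from the paper), the following sketch compares the per-example variance of the plain SGD gradient with that of the variance-reduced gradient in (7) on a synthetic least-squares problem near the optimum; all data and names are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
grad = lambda w, i: 2.0 * (X[i] @ w - y[i]) * X[i]     # gradient of psi_i
w_star = np.linalg.lstsq(X, y, rcond=None)[0]           # optimum of P
w_tilde = w_star + 0.01 * rng.normal(size=d)            # snapshot near w_star
mu = sum(grad(w_tilde, i) for i in range(n)) / n        # full gradient at snapshot
w = w_star + 0.01 * rng.normal(size=d)                  # current iterate near w_star
g_sgd = np.array([grad(w, i) for i in range(n)])
g_vr = np.array([grad(w, i) - grad(w_tilde, i) + mu for i in range(n)])
print(g_sgd.var(axis=0).sum(), g_vr.var(axis=0).sum())  # the second is far smaller
```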
This argument will be made more rigorous in the next section, where we will analyze the algorithm
in Figure 1 that summarizes the ideas described in this section. We call this method stochastic
variance reduced gradient (SVRG) because it explicitly reduces the variance of SGD. Unlike SGD,
the learning rate ?t for SVRG does not have to decay, which leads to faster convergence as one can
use a relatively large learning rate. This is confirmed by our experiments.
In practical implementations, it is natural to choose option I, or take $\tilde{w}_s$ to be the average of the
past t iterates. However, our analysis depends on option II. Note that each stage s requires 2m + n
gradient computations (for some convex problems, one may save the intermediate gradients $\nabla \psi_i(\tilde{w})$,
and thus only m + n gradient computations are needed). Therefore it is natural to choose m to
be the same order of n but slightly larger (for example m = 2n for convex problems and m =
5n for nonconvex problems in our experiments). In comparison, standard SGD requires only m
gradient computations. Since gradient may be the computationally most intensive operation, for fair
comparison, we compare SGD to SVRG based on the number of gradient computations.
3 Analysis
For simplicity we will only consider the case that each $\psi_i(w)$ is convex and smooth, and $P(w)$ is
strongly convex.
Theorem 1. Consider SVRG in Figure 1 with option II. Assume that all $\psi_i$ are convex and both (5)
and (6) hold with $\gamma > 0$. Let $w_* = \arg\min_w P(w)$. Assume that m is sufficiently large so that
$$\alpha = \frac{1}{\gamma \eta (1 - 2L\eta) m} + \frac{2L\eta}{1 - 2L\eta} < 1,$$
then we have geometric convergence in expectation for SVRG:
$$\mathbb{E}\, P(\tilde{w}_s) \le \mathbb{E}\, P(w_*) + \alpha^s \left[ P(\tilde{w}_0) - P(w_*) \right].$$
Procedure SVRG
  Parameters: update frequency m and learning rate $\eta$
  Initialize $\tilde{w}_0$
  Iterate: for s = 1, 2, ...
    $\tilde{w} = \tilde{w}_{s-1}$
    $\tilde{\mu} = \frac{1}{n} \sum_{i=1}^n \nabla \psi_i(\tilde{w})$
    $w_0 = \tilde{w}$
    Iterate: for t = 1, 2, ..., m
      Randomly pick $i_t \in \{1, \ldots, n\}$ and update the weight
        $w_t = w_{t-1} - \eta \left( \nabla \psi_{i_t}(w_{t-1}) - \nabla \psi_{i_t}(\tilde{w}) + \tilde{\mu} \right)$
    end
    option I: set $\tilde{w}_s = w_m$
    option II: set $\tilde{w}_s = w_t$ for randomly chosen $t \in \{0, \ldots, m-1\}$
  end
Figure 1: Stochastic Variance Reduced Gradient
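For concreteness, a minimal NumPy sketch of the procedure in Figure 1 follows; the function signature, the `grad_i` callback, and the decision to store the inner iterates in order to realize option II are our own assumptions for illustration, not part of the paper.
```python
import numpy as np

def svrg(grad_i, n, d, eta, m, num_stages, option="II", rng=None):
    """Minimal sketch of Figure 1. `grad_i(w, i)` should return the gradient
    of the i-th component function psi_i at w; names/signature are ours."""
    rng = rng or np.random.default_rng(0)
    w_tilde = np.zeros(d)
    for _ in range(num_stages):
        # Snapshot gradient: mu_tilde = (1/n) * sum_i grad psi_i(w_tilde)
        mu_tilde = sum(grad_i(w_tilde, i) for i in range(n)) / n
        w = w_tilde.copy()
        inner = [w.copy()]                    # stored only to realize option II
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced gradient, update rule (7)
            w = w - eta * (grad_i(w, i) - grad_i(w_tilde, i) + mu_tilde)
            inner.append(w.copy())
        # Option I: last iterate; option II: uniform over t in {0, ..., m-1}
        w_tilde = w if option == "I" else inner[rng.integers(m)]
    return w_tilde
```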
Proof. Given any i, consider
$$g_i(w) = \psi_i(w) - \psi_i(w_*) - \nabla \psi_i(w_*)^\top (w - w_*).$$
We know that $g_i(w_*) = \min_w g_i(w)$ since $\nabla g_i(w_*) = 0$. Therefore
$$0 = g_i(w_*) \le \min_\eta \left[ g_i(w - \eta \nabla g_i(w)) \right]
\le \min_\eta \left[ g_i(w) - \eta \|\nabla g_i(w)\|_2^2 + 0.5 L \eta^2 \|\nabla g_i(w)\|_2^2 \right] = g_i(w) - \frac{1}{2L} \|\nabla g_i(w)\|_2^2.$$
That is,
$$\|\nabla \psi_i(w) - \nabla \psi_i(w_*)\|_2^2 \le 2L \left[ \psi_i(w) - \psi_i(w_*) - \nabla \psi_i(w_*)^\top (w - w_*) \right].$$
By summing the above inequality over $i = 1, \ldots, n$, and using the fact that $\nabla P(w_*) = 0$, we obtain
$$n^{-1} \sum_{i=1}^n \|\nabla \psi_i(w) - \nabla \psi_i(w_*)\|_2^2 \le 2L \left[ P(w) - P(w_*) \right]. \tag{8}$$
We can now proceed to prove the theorem. Let $v_t = \nabla \psi_{i_t}(w_{t-1}) - \nabla \psi_{i_t}(\tilde{w}) + \tilde{\mu}$. Conditioned on
$w_{t-1}$, we can take expectation with respect to $i_t$, and obtain:
$$\begin{aligned}
\mathbb{E} \|v_t\|_2^2
&\le 2\, \mathbb{E} \|\nabla \psi_{i_t}(w_{t-1}) - \nabla \psi_{i_t}(w_*)\|_2^2 + 2\, \mathbb{E} \|[\nabla \psi_{i_t}(\tilde{w}) - \nabla \psi_{i_t}(w_*)] - \nabla P(\tilde{w})\|_2^2 \\
&= 2\, \mathbb{E} \|\nabla \psi_{i_t}(w_{t-1}) - \nabla \psi_{i_t}(w_*)\|_2^2 + 2\, \mathbb{E} \|[\nabla \psi_{i_t}(\tilde{w}) - \nabla \psi_{i_t}(w_*)] - \mathbb{E}[\nabla \psi_{i_t}(\tilde{w}) - \nabla \psi_{i_t}(w_*)]\|_2^2 \\
&\le 2\, \mathbb{E} \|\nabla \psi_{i_t}(w_{t-1}) - \nabla \psi_{i_t}(w_*)\|_2^2 + 2\, \mathbb{E} \|\nabla \psi_{i_t}(\tilde{w}) - \nabla \psi_{i_t}(w_*)\|_2^2 \\
&\le 4L \left[ P(w_{t-1}) - P(w_*) + P(\tilde{w}) - P(w_*) \right].
\end{aligned}$$
The first inequality uses $\|a + b\|_2^2 \le 2\|a\|_2^2 + 2\|b\|_2^2$ and $\tilde{\mu} = \nabla P(\tilde{w})$. The second inequality uses
$\mathbb{E}\|\xi - \mathbb{E}\xi\|_2^2 = \mathbb{E}\|\xi\|_2^2 - \|\mathbb{E}\xi\|_2^2 \le \mathbb{E}\|\xi\|_2^2$ for any random vector $\xi$. The third inequality uses (8).
Now by noticing that conditioned on $w_{t-1}$, we have $\mathbb{E}\, v_t = \nabla P(w_{t-1})$; and this leads to
$$\begin{aligned}
\mathbb{E} \|w_t - w_*\|_2^2
&= \|w_{t-1} - w_*\|_2^2 - 2\eta (w_{t-1} - w_*)^\top \mathbb{E}\, v_t + \eta^2\, \mathbb{E} \|v_t\|_2^2 \\
&\le \|w_{t-1} - w_*\|_2^2 - 2\eta (w_{t-1} - w_*)^\top \nabla P(w_{t-1}) + 4L\eta^2 \left[ P(w_{t-1}) - P(w_*) + P(\tilde{w}) - P(w_*) \right] \\
&\le \|w_{t-1} - w_*\|_2^2 - 2\eta \left[ P(w_{t-1}) - P(w_*) \right] + 4L\eta^2 \left[ P(w_{t-1}) - P(w_*) + P(\tilde{w}) - P(w_*) \right] \\
&= \|w_{t-1} - w_*\|_2^2 - 2\eta(1 - 2L\eta) \left[ P(w_{t-1}) - P(w_*) \right] + 4L\eta^2 \left[ P(\tilde{w}) - P(w_*) \right].
\end{aligned}$$
The first inequality uses the previously obtained inequality for $\mathbb{E}\|v_t\|_2^2$, and the second inequality uses the
convexity of $P(w)$, which implies that $-(w_{t-1} - w_*)^\top \nabla P(w_{t-1}) \le P(w_*) - P(w_{t-1})$.
We consider a fixed stage s, so that $\tilde{w} = \tilde{w}_{s-1}$ and $\tilde{w}_s$ is selected after all of the updates have
completed. By summing the previous inequality over $t = 1, \ldots, m$, taking expectation with all the
history, and using option II at stage s, we obtain
$$\begin{aligned}
\mathbb{E} \|w_m - w_*\|_2^2 + 2\eta(1 - 2L\eta) m\, \mathbb{E}\left[ P(\tilde{w}_s) - P(w_*) \right]
&\le \mathbb{E} \|w_0 - w_*\|_2^2 + 4Lm\eta^2\, \mathbb{E}\left[ P(\tilde{w}) - P(w_*) \right] \\
&= \mathbb{E} \|\tilde{w} - w_*\|_2^2 + 4Lm\eta^2\, \mathbb{E}\left[ P(\tilde{w}) - P(w_*) \right] \\
&\le \frac{2}{\gamma}\, \mathbb{E}\left[ P(\tilde{w}) - P(w_*) \right] + 4Lm\eta^2\, \mathbb{E}\left[ P(\tilde{w}) - P(w_*) \right] \\
&= 2(\gamma^{-1} + 2Lm\eta^2)\, \mathbb{E}\left[ P(\tilde{w}) - P(w_*) \right].
\end{aligned}$$
The second inequality uses the strong convexity property (6). We thus obtain
$$\mathbb{E}\left[ P(\tilde{w}_s) - P(w_*) \right] \le \left[ \frac{1}{\gamma \eta (1 - 2L\eta) m} + \frac{2L\eta}{1 - 2L\eta} \right] \mathbb{E}\left[ P(\tilde{w}_{s-1}) - P(w_*) \right].$$
This implies that $\mathbb{E}\left[ P(\tilde{w}_s) - P(w_*) \right] \le \alpha^s\, \mathbb{E}\left[ P(\tilde{w}_0) - P(w_*) \right]$. The desired bound follows.
The bound we obtained in Theorem 1 is comparable to those obtained in Le Roux et al. [2012],
Shalev-Shwartz and Zhang [2012] (if we ignore the log factor). To see this, we may consider for
simplicity the most indicative case where the condition number $L/\gamma = n$. Due to the poor condition
number, the standard batch gradient descent requires complexity of $n \ln(1/\epsilon)$ iterations over the
data to achieve accuracy of $\epsilon$, which means we have to process $n^2 \ln(1/\epsilon)$ examples. In
comparison, in our procedure we may take $\eta = 0.1/L$ and $m = O(n)$ to obtain a convergence rate of
$\alpha = 1/2$. Therefore to achieve an accuracy of $\epsilon$, we need to process $n \ln(1/\epsilon)$ examples.
This matches the results of Le Roux et al. [2012], Shalev-Shwartz and Zhang [2012]. Nevertheless,
our analysis is significantly simpler than both Le Roux et al. [2012] and Shalev-Shwartz and Zhang
[2012], and the explicit variance reduction argument provides better intuition on why this method
works. In fact, in Section 4 we show that a similar intuition can be used to explain the effectiveness
of SDCA.
The SVRG algorithm can also be applied to smooth but not strongly convex problems. A convergence
rate of $O(1/T)$ may be obtained, which improves the standard SGD convergence rate of
$O(1/\sqrt{T})$. In order to apply SVRG to nonconvex problems such as neural networks, it is useful
to start with an initial vector $\tilde{w}_0$ that is close to a local minimum (which may be obtained with
SGD), and then the method can be used to accelerate the local convergence rate of SGD (which may
converge very slowly by itself). If the system is locally (strongly) convex, then Theorem 1 can be
directly applied, which implies a local geometric convergence rate with a constant learning rate.
4 SDCA as Variance Reduction
It can be shown that both SDCA and SAG are connected to SVRG in the sense that they are also
variance reduction methods for SGD, although using different techniques. In the following we
present the variance reduction view of SDCA, which provides additional insights into these recently
proposed fast convergence methods for stochastic optimization. In SDCA, we consider the following
problem with convex $\phi_i(w)$:
$$w_* = \arg\min_w P(w), \qquad P(w) = \frac{1}{n} \sum_{i=1}^n \phi_i(w) + 0.5 \lambda w^\top w. \tag{9}$$
This is the same as our formulation with $\psi_i(w) = \phi_i(w) + 0.5 \lambda w^\top w$.
We can take the derivative of (9) and derive a "dual" representation of w at the solution $w_*$ as:
$$w_* = \sum_{i=1}^n \alpha_i^*,$$
where the dual variables
$$\alpha_i^* = -\frac{1}{\lambda n} \nabla \phi_i(w_*). \tag{10}$$
Therefore in the SGD update (3), if we maintain a representation
$$w^{(t)} = \sum_{i=1}^n \alpha_i^{(t)}, \tag{11}$$
then the update of $\alpha$ becomes:
$$\alpha_\ell^{(t)} = \begin{cases} (1 - \eta_t \lambda)\, \alpha_i^{(t-1)} - \eta_t \nabla \phi_i(w) & \ell = i \\ (1 - \eta_t \lambda)\, \alpha_\ell^{(t-1)} & \ell \ne i \end{cases}. \tag{12}$$
This update rule requires $\eta_t \to 0$ when $t \to \infty$.
Alternatively, we may consider starting with SGD by maintaining (11), and then apply the following
Dual Coordinate Ascent rule:
$$\alpha_\ell^{(t)} = \begin{cases} \alpha_i^{(t-1)} - \eta_t \left( \nabla \phi_i(w^{(t-1)}) + \lambda n\, \alpha_i^{(t-1)} \right) & \ell = i \\ \alpha_\ell^{(t-1)} & \ell \ne i \end{cases} \tag{13}$$
and then update w as $w^{(t)} = w^{(t-1)} + (\alpha_i^{(t)} - \alpha_i^{(t-1)})$.
It can be checked that if we take expectation over random $i \in \{1, \ldots, n\}$, then the SGD rule in (12)
and the dual coordinate ascent rule (13) both yield the gradient descent rule
$$\mathbb{E}[w^{(t)} \mid w^{(t-1)}] = w^{(t-1)} - \eta_t \nabla P(w^{(t-1)}).$$
Therefore both can be regarded as different realizations of the more general stochastic gradient rule
in (4). However, the advantage of (13) is that we may take a larger step when $t \to \infty$. This is because according to (10), when the primal-dual parameters $(w, \alpha)$ converge to the optimal parameters
$(w_*, \alpha_*)$, we have
$$(\nabla \phi_i(w) + \lambda n \alpha_i) \to 0,$$
which means that even if the learning rate $\eta_t$ stays bounded away from zero, the procedure can
converge. This is the same effect as SVRG, in the sense that the variance goes to zero asymptotically:
as $w \to w_*$ and $\alpha \to \alpha_*$, we have
$$\frac{1}{n} \sum_{i=1}^n \|\nabla \phi_i(w) + \lambda n \alpha_i\|_2^2 \to 0.$$
That is, SDCA is also a variance reduction method for SGD, which is similar to SVRG.
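For illustration only, the following sketch implements one step of rule (13) while maintaining the representation (11); the function name and the `grad_phi_i` callback are assumptions of ours.
```python
import numpy as np

def dca_step(w, alpha, grad_phi_i, i, lam, n, eta):
    """One step of rule (13), maintaining the representation (11),
    w = sum_i alpha_i. Names and the `grad_phi_i` callback are ours."""
    # The correction vanishes as (w, alpha) approach the optimum, by (10),
    # so eta can stay bounded away from zero.
    delta = -eta * (grad_phi_i(w, i) + lam * n * alpha[i])
    alpha[i] = alpha[i] + delta   # update only the i-th dual vector
    return w + delta, alpha       # keep w consistent with (11)
```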
From this discussion, we can view SVRG as an explicit variance reduction technique for SGD which
is similar to SDCA. However, it is simpler, more intuitive, and easier to analyze. This relationship
provides useful insights into the underlying optimization problem that may allow us to make further
improvements.
5 Experiments
To confirm the theoretical results and insights, we experimented with SVRG (Fig. 1 Option I) in
comparison with SGD and SDCA with linear predictors (convex) and neural nets (nonconvex). In
all the figures, the x-axis is computational cost measured by the number of gradient computations
divided by n. For SGD, it is the number of passes to go through the training data, and for SVRG
in the nonconvex case (neural nets), it includes the additional computation of $\nabla \psi_i(\tilde{w})$ both in each
iteration and for computing the gradient average $\tilde{\mu}$. For SVRG in our convex case, however, $\nabla \psi_i(\tilde{w})$
does not have to be re-computed in each iteration. Since in this case the gradient is always a multiple
of $x_i$, i.e., $\nabla \psi_i(w) = \phi_i'(w^\top x_i)\, x_i$ where $\psi_i(w) = \phi_i(w^\top x_i)$, $\nabla \psi_i(\tilde{w})$ can be compactly saved in
memory by only saving the scalars $\phi_i'(\tilde{w}^\top x_i)$, with the same memory consumption as SDCA and SAG.
The interval m was set to 2n (convex) and 5n (nonconvex). The weights for SVRG were initialized
by performing 1 iteration (convex) or 10 iterations (nonconvex) of SGD; therefore, the line for
SVRG starts after x = 1 (convex) or x = 10 (nonconvex) in the respective figures.
[Figure 2 plot panels omitted: only axis ticks and legend labels survived extraction. Panel titles: "MNIST convex: training loss P(w)", "MNIST convex: training loss residual", "MNIST convex: update variance"; legend entries include SVRG:0.025, SGD:0.005, SGD:0.0025, SGD:0.001, SGD-best, SGD-best/$\eta(t)$, and SDCA; the x-axis of each panel is #grad / n.]
Figure 2: Multiclass logistic regression (convex) on MNIST. (a) Training loss comparison with SGD with
fixed learning rates. The numbers in the legends are the learning rate. (b) Training loss residual $P(w) - P(w_*)$;
comparison with best-tuned SGD with learning rate scheduling and SDCA. (c) Variance of weight update
(including multiplication with the learning rate).
First, we performed L2-regularized multiclass logistic regression (convex optimization) on MNIST$^1$
with regularization parameter $\lambda$ = 1e-4. Fig. 2 (a) shows training loss (i.e., the optimization objective
P (w)) in comparison with SGD with fixed learning rates. The results are indicative of the known
weakness of SGD, which also illustrates the strength of SVRG. That is, when a relatively large
learning rate $\eta$ is used with SGD, training loss drops fast at first, but it oscillates above the minimum
and never goes down to the minimum. With small $\eta$, the minimum may be approached eventually,
but it will take many iterations to get there. Therefore, to accelerate SGD, one has to start with
relatively large $\eta$ and gradually decrease it (learning rate scheduling), as commonly practiced. By
contrast, using a single relatively large value of $\eta$, SVRG smoothly goes down faster than SGD.
This is in line with our theoretical prediction that one can use a relatively large $\eta$ with SVRG, which
leads to faster convergence.
Fig. 2 (b) and (c) compare SVRG with best-tuned SGD with learning rate scheduling and SDCA.
"SGD-best" is the best-tuned SGD, which was chosen by preferring smaller training loss from a
large number of parameter combinations for two types of learning scheduling: exponential decay
$\eta(t) = \eta_0\, a^{\lfloor t/n \rfloor}$ with parameters $\eta_0$ and $a$ to adjust, and t-inverse $\eta(t) = \eta_0 (1 + b \lfloor t/n \rfloor)^{-1}$ with $\eta_0$
and b to adjust. (Not surprisingly, the best-tuned SGD with learning rate scheduling outperformed
the best-tuned SGD with a fixed learning rate throughout our experiments.) Fig. 2 (b) shows training
loss residual, which is training loss minus the optimum (estimated by running gradient descent for
a very long time): $P(w) - P(w_*)$. We observe that SVRG's loss residual goes down exponentially,
which is in line with Theorem 1, and that SVRG is competitive with SDCA (the two lines are almost
overlapping) and decreases faster than SGD-best. In Fig. 2 (c), we show the variance of SVRG
update $-\eta \left( \nabla \psi_i(w) - \nabla \psi_i(\tilde{w}) + \tilde{\mu} \right)$ in comparison with the variance of SGD update $-\eta(t) \nabla \psi_i(w)$
and SDCA. As expected, the variance of both SVRG and SDCA decreases as optimization proceeds,
and the variance of SGD with a fixed learning rate ("SGD:0.001") stays high. The variance of the
best-tuned SGD decreases, but this is due to the forced exponential decay of the learning rate, and
the variance of the gradients $\nabla \psi_i(w)$ (the dotted line labeled as "SGD-best/$\eta(t)$") stays high.
Fig. 3 shows more convex-case results (L2-regularized logistic regression) in terms of training loss
residual (top) and test error rate (bottom) on rcv1.binary and covtype.binary from the LIBSVM site$^2$,
protein$^3$, and CIFAR-10$^4$. As protein and covtype do not come with labeled test data, we randomly
split the training data into halves to make the training/test split. CIFAR was normalized into [0, 1] by
division with 255 (which was also done with MNIST and CIFAR in the other figures), and protein
was standardized. $\lambda$ was set to 1e-3 (CIFAR) and 1e-5 (rest). Overall, SVRG is competitive with
SDCA and clearly more advantageous than the best-tuned SGD. It is also worth mentioning that a
recent study Schmidt et al. [2013] reports that SAG and SDCA are competitive.
To test SVRG with nonconvex objectives, we trained neural nets (with one fully-connected hidden
layer of 100 nodes and ten softmax output nodes; sigmoid activation and L2 regularization) with
mini-batches of size 10 on MNIST and CIFAR-10, both of which are standard datasets for deep
$^1$ http://yann.lecun.com/exdb/mnist/
$^2$ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
$^3$ http://osmot.cs.cornell.edu/kddcup/datasets.html
$^4$ www.cs.toronto.edu/~kriz/cifar.html
[Figure 3 and Figure 4 plot panels omitted: only axis ticks and legend labels survived extraction. Figure 3 panel titles: "rcv1 convex", "cover type convex", "protein convex", "CIFAR10 convex" (loss residual and test error rows); Figure 4 panel titles: "MNIST nonconvex", "CIFAR10 nonconvex" (update variance, training loss, test error); legend entries include SGD-best, SGD-best/$\eta(t)$, SDCA, and SVRG; the x-axis of each panel is #grad / n.]
Figure 3: More convex-case results. Loss residual $P(w) - P(w_*)$ (top) and test error rates (bottom). L2-regularized logistic regression (10-class for CIFAR-10 and binary for the rest).
Figure 4: Neural net results (nonconvex).
neural net studies; $\lambda$ was set to 1e-4 and 1e-3, respectively. In Fig. 4 we confirm that the results are
similar to the convex case; i.e., SVRG reduces the variance and smoothly converges faster than the
best-tuned SGD with learning rate scheduling, which is a de facto standard method for neural net
training. As said earlier, methods such as SDCA and SAG are not practical for neural nets due to
their memory requirement. We view these results as promising. However, further investigation, in
particular with larger/deeper neural nets for which training cost is a critical issue, is still needed.
6 Conclusion
This paper introduces an explicit variance reduction method for stochastic gradient descent methods. For smooth and strongly convex functions, we prove that this method enjoys the same fast
convergence rate as those of SDCA and SAG. However, our proof is significantly simpler and more
intuitive. Moreover, unlike SDCA or SAG, this method does not require the storage of gradients, and
thus is more easily applicable to complex problems such as structured prediction or neural network
learning.
Acknowledgment
We thank Leon Bottou and Alekh Agarwal for spotting a mistake in the original theorem.
References
C.J. Hsieh, K.W. Chang, C.J. Lin, S.S. Keerthi, and S. Sundararajan. A dual coordinate descent
method for large-scale linear SVM. In ICML, pages 408–415, 2008.
Nicolas Le Roux, Mark Schmidt, and Francis Bach. A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets. arXiv
preprint arXiv:1202.6258, 2012.
Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, Boston, 2004.
Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic
average gradient. arXiv preprint arXiv:1309.2388, 2013.
S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for
SVM. In International Conference on Machine Learning, pages 807–814, 2007.
Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized
loss minimization. arXiv preprint arXiv:1209.1873, 2012.
T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
| 4937 |@word mild:1 version:2 advantageous:1 hsieh:2 pick:2 sgd:72 kwm:1 minus:1 reduction:13 initial:1 practiced:1 tuned:8 past:1 ka:1 com:1 activation:1 drop:1 update:16 half:1 selected:1 indicative:2 provides:4 consulting:1 iterates:1 node:2 toronto:1 org:1 simpler:5 zhang:11 baidu:1 kak22:1 prove:3 introductory:1 introduce:1 expected:1 solver:1 lib:1 becomes:2 project:1 moreover:3 bounded:1 underlying:1 impractical:1 every:1 sag:9 oscillates:1 k2:1 facto:1 yn:1 local:3 mistake:1 china:1 mentioning:1 practical:3 lecun:1 acknowledgment:1 procedure:3 sdca:33 significantly:4 protein:4 get:1 pegasos:1 cannot:1 close:2 scheduling:6 storage:4 www:2 go:6 starting:1 convex:40 roux:9 simplicity:2 rule:11 insight:4 regarded:1 coordinate:5 us:5 expensive:1 labeled:2 bottom:1 csie:1 preprint:3 connected:2 trade:1 decrease:4 intuition:2 convexity:2 complexity:1 nesterov:2 trained:1 depend:1 solving:1 predictive:1 division:1 compactly:1 easily:3 accelerate:2 jersey:1 various:1 forced:1 fast:8 approached:1 shalev:10 larger:4 gi:12 itself:1 sequence:2 advantage:2 net:8 propose:1 realization:1 achieve:3 intuitive:5 webpage:1 convergence:26 requirement:2 optimum:6 converges:1 derive:1 measured:1 strong:1 implemented:1 auxiliary:1 c:2 implies:3 come:1 saved:1 stochastic:19 libsvmtools:1 require:4 fix:1 ntu:1 investigation:1 hold:1 sufficiently:1 exp:1 algorithmic:1 kvt:3 lm:4 outperformed:1 applicable:2 minimization:1 clearly:1 always:1 pn:3 cornell:1 improvement:1 contrast:1 rigorous:1 sense:2 hidden:1 issue:3 dual:9 arg:2 overall:1 html:2 development:1 special:1 initialize:1 softmax:1 equal:1 saving:1 never:1 sampling:1 identical:1 represents:1 kw:2 icml:1 report:1 inherent:1 employ:1 abt:1 randomly:5 kwt:5 keerthi:1 maintain:2 n1:1 evaluation:1 adjust:2 weakness:1 introduces:3 primal:2 accurate:1 cifar10:4 minw:2 respective:1 initialized:1 desired:1 re:1 theoretical:2 earlier:1 cover:2 disadvantage:1 cost:3 predictor:1 johnson:1 mnist1:1 international:2 stay:3 preferring:1 off:1 connecting:1 quickly:1 squared:1 choose:3 slowly:1 derivative:3 de:1 includes:1 inc:1 explicitly:2 caused:1 depends:1 performed:1 view:4 analyze:2 francis:2 wm:1 start:3 option:7 competitive:3 shai:1 contribution:1 square:3 accuracy:2 variance:39 yield:1 bbt:1 confirmed:1 researcher:1 worth:1 randomness:1 history:1 explain:1 reach:1 checked:1 frequency:1 proof:3 popular:3 improves:1 rie:1 formulation:1 done:1 strongly:10 stage:3 overlapping:1 logistic:6 usa:2 effect:2 k22:26 normalized:1 remedy:2 regularization:3 kriz:1 generalized:1 exdb:1 recently:1 sigmoid:1 exponentially:1 approximates:1 kluwer:1 sundararajan:1 bk22:1 rd:2 alekh:1 gt:6 recent:2 store:1 certain:1 nonconvex:16 inequality:9 binary:3 vt:3 yi:4 minimum:4 additional:3 converge:4 ii:4 full:2 multiple:1 rj:1 reduces:2 smooth:8 match:1 faster:6 bach:2 long:2 cifar:7 lin:1 divided:1 prediction:6 regression:9 basic:1 expectation:8 rutgers:1 arxiv:6 iteration:10 agarwal:1 achieved:1 proposal:1 interval:1 rest:2 unlike:3 ascent:4 pass:1 legend:1 effectiveness:1 call:2 intermediate:2 split:2 iterate:2 reduce:1 idea:3 multiclass:2 intensive:1 grad:15 accelerating:1 proceed:1 deep:2 generally:1 useful:2 locally:1 ten:1 reduced:5 http:4 dotted:1 estimated:3 per:2 nevertheless:1 libsvm:1 asymptotically:3 sum:1 beijing:1 package:1 noticing:1 inverse:1 throughout:1 almost:1 yann:1 draw:2 summarizes:1 comparable:1 bound:2 layer:1 strength:1 lkw:1 min:4 argument:2 leon:3 performing:1 rcv1:3 relatively:8 structured:3 according:1 combination:1 poor:1 smaller:1 slightly:1 tw:1 
modification:1 gradually:1 ln:4 computationally:1 previously:1 kw0:1 eventually:1 cjlin:1 needed:2 know:2 singer:1 end:2 operation:1 apply:3 observe:1 away:1 save:1 batch:3 encounter:1 schmidt:3 slower:2 original:1 top:2 running:1 include:1 ensure:1 completed:1 standardized:1 maintaining:1 unsuitable:1 objective:2 said:1 gradient:43 thank:1 w0:9 consumption:1 relationship:1 mini:1 minimizing:1 nc:2 slows:2 design:1 implementation:1 twenty:1 snapshot:1 datasets:3 finite:2 descent:14 defining:1 y1:1 proceeds:1 below:1 spotting:1 including:1 memory:3 explanation:2 suitable:2 critical:1 natural:2 regularized:5 residual:6 improve:1 axis:1 geometric:2 l2:3 multiplication:1 loss:25 fully:1 lecture:1 srebro:1 versus:1 storing:1 course:1 summary:1 surprisingly:1 svrg:44 enjoys:2 allow:1 deeper:1 taking:1 xn:1 made:1 commonly:1 approximate:2 ignore:1 keep:2 confirm:2 summing:2 xi:8 shwartz:10 alternatively:1 kddcup:1 why:1 promising:1 nicolas:2 bottou:3 complex:4 n2:1 fair:1 x1:1 fig:7 referred:2 ny:1 tong:2 slow:3 sub:2 kbk22:1 explicit:5 exponential:3 third:1 down:6 theorem:6 decay:4 svm:3 experimented:1 covtype:2 mnist:9 conditioned:2 illustrates:1 easier:1 boston:1 smoothly:2 scalar:1 chang:1 applies:1 relies:1 goal:1 wt:22 pas:1 mark:2 |
4,351 | 4,938 | Accelerated Mini-Batch Stochastic Dual Coordinate
Ascent
Shai Shalev-Shwartz
School of Computer Science and Engineering
Hebrew University, Jerusalem, Israel
Tong Zhang
Department of Statistics
Rutgers University, NJ, USA
Abstract
Stochastic dual coordinate ascent (SDCA) is an effective technique for solving
regularized loss minimization problems in machine learning. This paper considers
an extension of SDCA under the mini-batch setting that is often used in practice.
Our main contribution is to introduce an accelerated mini-batch version of SDCA
and prove a fast convergence rate for this method. We discuss an implementation
of our method over a parallel computing system, and compare the results to both
the vanilla stochastic dual coordinate ascent and to the accelerated deterministic
gradient descent method of Nesterov [2007].
1 Introduction
We consider the following generic optimization problem. Let $\phi_1, \ldots, \phi_n$ be a sequence of vector
convex functions from $\mathbb{R}^d$ to $\mathbb{R}$, and let $g : \mathbb{R}^d \to \mathbb{R}$ be a strongly convex regularization function.
Our goal is to solve $\min_{x \in \mathbb{R}^d} P(x)$ where
$$P(x) = \left[ \frac{1}{n} \sum_{i=1}^n \phi_i(x) + g(x) \right]. \tag{1}$$
For example, given a sequence of n training examples $(v_1, y_1), \ldots, (v_n, y_n)$, where $v_i \in \mathbb{R}^d$ and
$y_i \in \mathbb{R}$, ridge regression is obtained by setting $g(x) = \frac{\lambda}{2} \|x\|^2$ and $\phi_i(x) = (x^\top v_i - y_i)^2$. Regularized logistic regression is obtained by setting $\phi_i(x) = \log(1 + \exp(-y_i x^\top v_i))$.
The dual problem of (1) is defined as follows: For each i, let $\phi_i^* : \mathbb{R}^d \to \mathbb{R}$ be the convex conjugate
of $\phi_i$, namely, $\phi_i^*(u) = \max_{z \in \mathbb{R}^d} (z^\top u - \phi_i(z))$. Similarly, let $g^*$ be the convex conjugate of $g$. The
dual problem is:
$$\max_{\alpha \in \mathbb{R}^{d \times n}} D(\alpha) \quad \text{where} \quad D(\alpha) = \left[ \frac{1}{n} \sum_{i=1}^n -\phi_i^*(-\alpha_i) - g^*\!\left( \frac{1}{n} \sum_{i=1}^n \alpha_i \right) \right], \tag{2}$$
where for each i, $\alpha_i$ is the i-th column of the matrix $\alpha$.
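As a quick sanity check of the conjugate definition above (our own aside, not from the paper), the sketch below numerically maximizes $z u - \phi(z)$ for the scalar squared loss and compares it with the standard closed form $\phi^*(u) = uy + u^2/4$, which we state as a known fact rather than a claim of the paper.
```python
import numpy as np

# Numeric check of the conjugate definition for the scalar squared loss
# phi(z) = (z - y)^2, whose closed-form conjugate is phi*(u) = u*y + u**2/4.
y, u = 0.7, -1.3
zs = np.linspace(-50.0, 50.0, 2_000_001)
numeric = np.max(zs * u - (zs - y) ** 2)
closed = u * y + u ** 2 / 4.0
print(numeric, closed)   # the two values agree to roughly 1e-9
```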
The dual objective has a different dual vector associated with each primal function. Dual Coordinate
Ascent (DCA) methods solve the dual problem iteratively, where at each iteration of DCA, the dual
objective is optimized with respect to a single dual vector, while the rest of the dual vectors are
kept intact. Recently, Shalev-Shwartz and Zhang [2013a] analyzed a stochastic version of dual
coordinate ascent, abbreviated by SDCA, in which at each round we choose which dual vector to
optimize uniformly at random (see also Richtárik and Takáč [2012a]). In particular, let $x_*$ be the
optimum of (1). We say that a solution x is $\epsilon$-accurate if $P(x) - P(x_*) \le \epsilon$. Shalev-Shwartz and
Zhang [2013a] have derived the following convergence guarantee for SDCA: If $g(x) = \frac{\lambda}{2} \|x\|_2^2$ and
each $\phi_i$ is $(1/\gamma)$-smooth, then for every $\epsilon > 0$, if we run SDCA for at least
$$\left( n + \frac{1}{\lambda \gamma} \right) \log\!\left( \left( n + \frac{1}{\lambda \gamma} \right) \cdot \frac{1}{\epsilon} \right)$$
iterations, then the solution of the SDCA algorithm will be $\epsilon$-accurate (in expectation). This convergence rate is significantly better than the more commonly studied stochastic gradient descent (SGD)
methods that are related to SDCA.$^1$
Another approach to solving (1) is deterministic gradient descent methods. In particular, Nesterov
[2007] proposed an accelerated gradient descent (AGD) method for solving (1). Under the same
conditions mentioned above, AGD finds an $\epsilon$-accurate solution after performing
$$O\!\left( \frac{1}{\sqrt{\lambda \gamma}} \log \frac{1}{\epsilon} \right)$$
iterations.
The advantage of SDCA over AGD is that each iteration involves only a single dual vector and
usually costs O(d). In contrast, each iteration of AGD requires $\Theta(nd)$ operations. On the other
hand, AGD has a better dependence on the condition number of the problem: the iteration bound
of AGD scales with $1/\sqrt{\lambda\gamma}$ while the iteration bound of SDCA scales with $1/(\lambda\gamma)$.
In this paper we describe and analyze a new algorithm that interpolates between SDCA and AGD.
At each iteration of the algorithm, we randomly pick a subset of m indices from {1, . . . , n} and
update the dual vectors corresponding to this subset. This subset is often called a mini-batch. The
use of mini-batches is common with SGD optimization, and it is beneficial when the processing time
of a mini-batch of size m is much smaller than m times the processing time of one example (mini-batch of size 1). For example, in the practical training of neural networks with SGD, one is always
advised to use mini-batches because it is more efficient to perform matrix-matrix multiplications
over a mini-batch than an equivalent amount of matrix-vector multiplication operations (each over
a single training example). This is especially noticeable when GPU is used: in some cases the
processing time of a mini-batch of size 100 may be the same as that of a mini-batch of size 10.
Another typical use of mini-batch is for parallel computing, which was studied by various authors
for stochastic gradient descent (e.g., Dekel et al. [2012]). This is also the application scenario we
have in mind, and will be discussed in greater details in Section 3.
Recently, Takáč et al. [2013] studied mini-batch variants of SDCA in the context of the Support
Vector Machine (SVM) problem. They have shown that the naive mini-batching method, in which
m dual variables are optimized in parallel, might actually increase the number of iterations required.
They then describe several "safe" mini-batching schemes, and based on the analysis of Shalev-Shwartz and Zhang [2013a], have shown several speed-up results. However, their results are for the
non-smooth case and hence they do not obtain linear convergence rate. In addition, the speed-up
they obtain requires some spectral properties of the training examples. We take a different approach
and employ Nesterov's acceleration method, which has previously been applied to mini-batch SGD
optimization. This paper shows how to achieve acceleration for SDCA in the mini-batch setting. The
pseudo code of our Accelerated Mini-Batch SDCA, abbreviated by ASDCA, is presented below.
Procedure Accelerated Mini-Batch SDCA
  Parameters: scalars $\lambda$, $\gamma$, and $\theta \in [0, 1]$; mini-batch size m
  Initialize $\alpha_1^{(0)} = \cdots = \alpha_n^{(0)} = \tilde{\alpha}^{(0)} = 0$, $x^{(0)} = 0$
  Iterate: for t = 1, 2, ...
    $u^{(t-1)} = (1 - \theta)\, x^{(t-1)} + \theta\, \nabla g^*(\tilde{\alpha}^{(t-1)})$
    Randomly pick a subset $I \subseteq \{1, \ldots, n\}$ of size m and update the dual variables in I:
      $\alpha_i^{(t)} = (1 - \theta)\, \alpha_i^{(t-1)} - \theta\, \nabla \phi_i(u^{(t-1)})$ for $i \in I$
      $\alpha_j^{(t)} = \alpha_j^{(t-1)}$ for $j \notin I$
    $\tilde{\alpha}^{(t)} = \tilde{\alpha}^{(t-1)} + n^{-1} \sum_{i \in I} (\alpha_i^{(t)} - \alpha_i^{(t-1)})$
    $x^{(t)} = (1 - \theta)\, x^{(t-1)} + \theta\, \nabla g^*(\tilde{\alpha}^{(t)})$
  end
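The following is a minimal Python sketch of the procedure above, specialized to the squared Euclidean regularizer $g(x) = \frac{\lambda}{2}\|x\|^2$ used later in the paper, for which $\nabla g^*(\alpha) = \alpha/\lambda$; the function name, the `grad_phi` callback, and the fixed iteration count are our own illustrative choices.
```python
import numpy as np

def asdca(grad_phi, n, d, lam, theta, m, num_iters, rng=None):
    """Minimal sketch of Accelerated Mini-Batch SDCA for g(x) = (lam/2)||x||^2,
    so that grad g*(a) = a / lam. Names and signature are ours."""
    rng = rng or np.random.default_rng(0)
    alpha = np.zeros((n, d))      # one dual vector per example
    alpha_bar = np.zeros(d)       # alpha_bar = (1/n) * sum_i alpha_i
    x = np.zeros(d)
    for _ in range(num_iters):
        u = (1 - theta) * x + theta * alpha_bar / lam
        I = rng.choice(n, size=m, replace=False)   # mini-batch of dual indices
        for i in I:
            new_alpha_i = (1 - theta) * alpha[i] - theta * grad_phi(u, i)
            alpha_bar += (new_alpha_i - alpha[i]) / n
            alpha[i] = new_alpha_i
        x = (1 - theta) * x + theta * alpha_bar / lam
    return x
```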
In the next section we present our main result: an analysis of the number of iterations required
by ASDCA. We focus on the case of Euclidean regularization, namely, $g(x) = \frac{\lambda}{2} \|x\|^2$. Analyzing
more general strongly convex regularization functions is left for future work. In Section 3 we discuss
$^1$ An exception is the recent analysis given in Le Roux et al. [2012] for a variant of SGD.
parallel implementations of ASDCA and compare it to parallel implementations of AGD and SDCA.
In particular, we explain in which regimes ASDCA can be better than both AGD and SDCA. In
Section 4 we present some experimental results, demonstrating how ASDCA interpolates between
AGD and SDCA. The proof of our main theorem is differed to a long version of this paper (ShalevShwartz and Zhang [2013b]). We conclude with a discussion of our work in light of related works
in Section 5.
2 Main Results
Our main result is a bound on the number of iterations required by ASDCA to find an $\epsilon$-accurate
solution. In our analysis, we only consider the squared Euclidean norm regularization,
$$g(x) = \frac{\lambda}{2} \|x\|^2,$$
where $\|\cdot\|$ is the Euclidean norm and $\lambda > 0$ is a regularization parameter. The analysis for general
$\lambda$-strongly convex regularizers is left for future work. For the squared Euclidean norm we have
$$g^*(\alpha) = \frac{1}{2\lambda} \|\alpha\|^2 \quad \text{and} \quad \nabla g^*(\alpha) = \frac{\alpha}{\lambda}.$$
We further assume that each $\phi_i$ is $1/\gamma$-smooth with respect to $\|\cdot\|$, namely,
$$\forall x, z, \quad \phi_i(x) \le \phi_i(z) + \nabla \phi_i(z)^\top (x - z) + \frac{1}{2\gamma} \|x - z\|^2.$$
For example, if $\phi_i(x) = (x^\top v_i - y_i)^2$, then it is $\|v_i\|^2$-smooth.
The smoothness of $\phi_i$ also implies that $\phi_i^*(\cdot)$ is $\gamma$-strongly convex:
$$\forall \beta \in [0, 1], \quad \phi_i^*((1-\beta)\alpha + \beta u) \le (1-\beta)\, \phi_i^*(\alpha) + \beta\, \phi_i^*(u) - \frac{\gamma \beta (1-\beta)}{2} \|\alpha - u\|^2.$$
We have the following result for our method.
Theorem 1. Assume that $g(x) = \frac{\lambda}{2} \|x\|_2^2$ and for each i, $\phi_i$ is $(1/\gamma)$-smooth w.r.t. the Euclidean
norm. Suppose that the ASDCA algorithm is run with parameters $\lambda, \gamma, m, \theta$, where
$$\theta \le \frac{1}{4} \min\left\{ 1, \sqrt{\frac{\gamma \lambda n}{m}}, \gamma \lambda n, \frac{(\gamma \lambda n)^{2/3}}{m^{1/3}} \right\}. \tag{3}$$
Define the dual sub-optimality by $\Delta D(\alpha) = D(\alpha^*) - D(\alpha)$, where $\alpha^*$ is the optimal dual solution,
and the primal sub-optimality by $\Delta P(x) = P(x) - D(\alpha^*)$. Then,
$$m\, \mathbb{E}\, \Delta P(x^{(t)}) + n\, \mathbb{E}\, \Delta D(\alpha^{(t)}) \le (1 - \theta m/n)^t \left[ m\, \Delta P(x^{(0)}) + n\, \Delta D(\alpha^{(0)}) \right].$$
It follows that after performing
$$t \ge \frac{n/m}{\theta} \log \frac{m\, \Delta P(x^{(0)}) + n\, \Delta D(\alpha^{(0)})}{\epsilon\, m}$$
iterations, we have that $\mathbb{E}[P(x^{(t)}) - D(\alpha^{(t)})] \le \epsilon$.
Let us now discuss the bound, assuming $\theta$ is taken to be the right-hand side of (3). The dominating
factor of the bound on t becomes
$$\frac{n}{m\theta} = \frac{n}{m} \cdot 4 \max\left\{ 1, \sqrt{\frac{m}{\gamma\lambda n}}, \frac{1}{\gamma\lambda n}, \frac{m^{1/3}}{(\gamma\lambda n)^{2/3}} \right\} \tag{4}$$
$$= 4 \max\left\{ \frac{n}{m}, \sqrt{\frac{n/m}{\gamma\lambda}}, \frac{1/m}{\gamma\lambda}, \frac{n^{1/3}}{(\gamma\lambda m)^{2/3}} \right\}. \tag{5}$$
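A small helper (ours, for illustration) that evaluates the step-size condition (3); plugging its output into $n/(m\theta)$ reproduces the regimes of (4) and (5), and hence Table 1 below, up to constants.
```python
import math

def asdca_theta(gamma, lam, n, m):
    """Largest step-size parameter allowed by condition (3); our own helper."""
    gln = gamma * lam * n
    return 0.25 * min(1.0, math.sqrt(gln / m), gln,
                      gln ** (2.0 / 3.0) / m ** (1.0 / 3.0))

# Example: in the regime gamma*lam*n = Theta(1), the factor
# n / (m * asdca_theta(gamma, lam, n, m)) scales like n/sqrt(m),
# matching the ASDCA row of Table 1 up to constants.
```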
Algorithm | γλn = Θ(1) | γλn = Θ(1/m) | γλn = Θ(m)
SDCA      | n          | nm           | n
ASDCA     | n/√m       | n            | n/m
AGD       | √n         | √(nm)        | √(n/m)
Table 1: Comparison of Iteration Complexity

Algorithm | γλn = Θ(1) | γλn = Θ(1/m) | γλn = Θ(m)
SDCA      | n          | nm           | n
ASDCA     | n√m        | nm           | n
AGD       | n√n        | n√(nm)       | n√(n/m)
Table 2: Comparison of Number of Examples Processed

Table 1 summarizes several interesting cases, and compares the iteration bound of ASDCA to the
iteration bound of the vanilla SDCA algorithm (as analyzed in Shalev-Shwartz and Zhang [2013a])
and the Accelerated Gradient Descent (AGD) algorithm of Nesterov [2007]. In the tables, we ignore
constants and logarithmic factors.
As can be seen in the table, the ASDCA algorithm interpolates between SDCA and AGD. In particular, ASDCA has the same bound as SDCA when m = 1 and the same bound as AGD when
m = n. Recall that the cost of each iteration of AGD scales with n while the cost of each iteration
of SDCA does not scale with n. The cost of each iteration of ASDCA scales with m. To compensate
for the different cost per iteration of the different algorithms, we may also compare the complexity
in terms of the number of examples processed (see Table 2). This is also what we will study in
our empirical experiments. It should be mentioned that this comparison is meaningful in a single
processor environment, but not in a parallel computing environment when multiple examples can be
processed simultaneously in a minibatch. In the next section we discuss under what conditions the
overall runtime of ASDCA is better than both AGD and SDCA.
3 Parallel Implementation
In recent years, there has been a lot of interest in implementing optimization algorithms using a
parallel computing architecture (see Section 5). We now discuss how to implement AGD, SDCA,
and ASDCA when having a computing machine with s parallel computing nodes.
In the calculations below, we use the following facts:
- If each node holds a d-dimensional vector, we can compute the sum of these vectors in time
O(d log(s)) by applying a "tree-structure" summation (see for example the All-Reduce
architecture in Agarwal et al. [2011]; a toy sketch is given after this list).
- A node can broadcast a message with c bits to all other nodes in time O(c log^2(s)). To
see this, order nodes on the corners of the log_2(s)-dimensional hypercube. Then, at each
iteration, each node sends the message to its log(s) neighbors (namely, the nodes whose
code word is at a Hamming distance of 1 from the node). The message between the furthest
away nodes will pass after log(s) iterations. Overall, we perform log(s) iterations and each
iteration requires transmitting c log(s) bits.
- All nodes can broadcast a message with c bits to all other nodes in time O(cs log^2(s)). To
see this, simply apply the broadcasting of the different nodes mentioned above in parallel.
The number of iterations will still be the same, but now, at each iteration, each node should
transmit cs bits to its log(s) neighbors. Therefore, it takes O(cs log^2(s)) time.
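Here is the toy single-process model of the tree-structured summation referenced in the first bullet above; the function name and the list-based representation of nodes are our own simplifications and say nothing about a real All-Reduce implementation.
```python
import numpy as np

def tree_sum(vectors):
    """Toy model of tree-structured summation across s 'nodes': pairwise
    combines in O(log s) rounds, i.e. O(d log s) work on the critical path
    for d-dimensional vectors."""
    level = [np.asarray(v, dtype=float) for v in vectors]
    while len(level) > 1:
        nxt = [level[k] + level[k + 1] for k in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:          # odd node passes its vector up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```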
For concreteness of the discussion, we consider problems in which $\phi_i(x)$ takes the form of
$\ell(x^\top v_i, y_i)$, where $y_i$ is a scalar and $v_i \in \mathbb{R}^d$. This is the case in supervised learning of linear
predictors (e.g. logistic regression or ridge regression). We further assume that the average number
of non-zero elements of $v_i$ is $\bar{d}$. In very large-scale problems, a single machine cannot hold all of
the data in its memory. However, we assume that a single node can hold a fraction of 1/s of the data
in its memory.
Let us now discuss parallel implementations of the different algorithms starting with deterministic
gradient algorithms (such as AGD). The bottleneck operation of deterministic gradient algorithms is
the calculation of the gradient. In the notation mentioned above, this amounts to performing order
of $n\bar{d}$ operations. If the data is distributed over s computing nodes, where each node holds n/s
examples, we can calculate the gradient in time $O(n\bar{d}/s + d \log(s))$ as follows. First, each node
calculates the gradient over its own n/s examples (which takes time $O(n\bar{d}/s)$). Then, the s resulting
vectors in $\mathbb{R}^d$ are summed up in time $O(d \log(s))$.
Next, let us consider the SDCA algorithm. On a single computing node, it was observed that SDCA
is much more efficient than deterministic gradient descent methods, since each iteration of SDCA
costs only $\Theta(\bar{d})$ while each iteration of AGD costs $\Theta(n\bar{d})$. When we have s nodes, for the SDCA
algorithm, dividing the examples into s computing nodes does not yield any speed-up. However,
we can divide the features into the s nodes (that is, each node will hold d/s of the features for all
of the examples). This enables the computation of $x^\top v_i$ in (expected) time of $O(\bar{d}/s + s \log^2(s))$.
Indeed, node t will calculate $\sum_{j \in J_t} x_j v_{i,j}$, where $J_t$ is the set of features stored in node t (namely,
$|J_t| = d/s$). Then, each node broadcasts the resulting scalar to all the other nodes. Note that we
will obtain a speed-up over the naive implementation only if $s \log^2(s) \ll \bar{d}$.
For the ASDCA algorithm, each iteration involves the computation of the gradient over m examples.
We can choose to implement it by dividing the examples to the s nodes (as we did for AGD) or by
dividing the features into the s nodes (as we did for SDCA). In the first case, the cost of each iteration
is $O(m\bar{d}/s + d \log(s))$ while in the latter case, the cost of each iteration is $O(m\bar{d}/s + m s \log^2(s))$.
We will choose between these two implementations based on the relation between d, m, and s.
The runtime and communication time of each iteration is summarized in the table below.
Algorithm | partition type | runtime       | communication time
SDCA      | features       | $\bar{d}/s$   | $s \log^2(s)$
ASDCA     | features       | $\bar{d}m/s$  | $m s \log^2(s)$
ASDCA     | examples       | $\bar{d}m/s$  | $d \log(s)$
AGD       | examples       | $\bar{d}n/s$  | $d \log(s)$
We again see that ASDCA nicely interpolates between SDCA and AGD. In practice, it is usually
the case that there is a non-negligible cost of opening communication channels between nodes. In
that case, it will be better to apply the ASDCA with a value of m that reflects an adequate tradeoff
between the runtime of each node and the communication time. With the appropriate value of m
(which depends on constants like the cost of opening communication channels and sending packets
of bits between nodes), ASDCA may outperform both SDCA and AGD.
4 Experimental Results
In this section we demonstrate how ASDCA interpolates between SDCA and AGD. All of our
experiments are performed for the task of binary classification with a smooth variant of the hinge
loss (see Shalev-Shwartz and Zhang [2013a]). Specifically, let $(v_1, y_1), \ldots, (v_m, y_m)$ be a set of
labeled examples, where for every i, $v_i \in \mathbb{R}^d$ and $y_i \in \{\pm 1\}$. Define $\phi_i(x)$ to be
$$\phi_i(x) = \begin{cases} 0 & y_i x^\top v_i > 1 \\ 1/2 - y_i x^\top v_i & y_i x^\top v_i < 0 \\ \frac{1}{2} (1 - y_i x^\top v_i)^2 & \text{otherwise.} \end{cases}$$
We also set the regularization function to be $g(x) = \frac{\lambda}{2} \|x\|_2^2$ where $\lambda = 1/n$. This is the default
value for the regularization parameter taken in several optimization packages.
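A small vectorized implementation of this smoothed hinge loss (our own helper, matching the piecewise definition above):
```python
import numpy as np

def smoothed_hinge(margin):
    """Smoothed hinge loss above, evaluated at margin = y_i * (x @ v_i);
    vectorized over NumPy arrays."""
    m = np.asarray(margin, dtype=float)
    return np.where(m > 1.0, 0.0,
           np.where(m < 0.0, 0.5 - m, 0.5 * (1.0 - m) ** 2))
```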
Following Shalev-Shwartz and Zhang [2013a], the experiments were performed on three large
datasets with very different feature counts and sparsity. The astro-ph dataset classifies abstracts
of papers from the physics ArXiv according to whether they belong in the astro-physics section;
[Figure 1 plot panels omitted: only axis ticks and legend labels survived extraction. Panel columns: astro-ph, CCAT, cov1; rows: primal suboptimality, test loss, test error; legend entries include AGD, SDCA, and ASDCA with m ∈ {3, 30, 299}, {52, 523, 5229}, and {78, 781, 7813}; the x-axis of each panel is #processed examples.]
Figure 1: The figure presents the performance of AGD, SDCA, and ASDCA with different values
of mini-batch size, m. In all figures, the x axis is the number of processed examples. The three
columns are for the different datasets. Top: primal sub-optimality. Middle: average value of the
smoothed hinge loss function over a test set. Bottom: average value of the 0-1 loss over a test set.
CCAT is a classification task taken from the Reuters RCV1 collection; and cov1 is class 1 of the
covertype dataset of Blackard, Jock & Dean. The following table provides details of the dataset
characteristics.
Dataset  | Training Size | Testing Size | Features | Sparsity
astro-ph | 29882         | 32487        | 99757    | 0.08%
CCAT     | 781265        | 23149        | 47236    | 0.16%
cov1     | 522911        | 58101        | 54       | 22.22%
We ran ASDCA with values of m from the set $\{10^{-4} n, 10^{-3} n, 10^{-2} n\}$. We also ran the SDCA
algorithm and the AGD algorithm. In Figure 1 we depict the primal sub-optimality of the different
algorithms as a function of the number of examples processed. Note that each iteration of SDCA
processes a single example, each iteration of ASDCA processes m examples, and each iteration of
AGD processes n examples. As can be seen from the graphs, ASDCA indeed interpolates between
SDCA and AGD. It is clear from the graphs that SDCA is much better than AGD when we have a
single computing node. ASDCA performance is quite similar to SDCA when m is not very large.
As discussed in Section 3, when we have parallel computing nodes and there is a non-negligible cost
of opening communication channels between nodes, running ASDCA with an appropriate value of
m (which depends on constants like the cost of opening communication channels) may yield the
best performance.
5 Discussion and Related Work
We have introduced an accelerated version of stochastic dual coordinate ascent with mini-batches.
We have shown, both theoretically and empirically, that the resulting algorithm interpolates between
the vanilla stochastic coordinate descent algorithm and the accelerated gradient descent algorithm.
Using mini-batches in stochastic learning has received a lot of attention in recent years. E.g., Shalev-Shwartz et al. [2007] reported experiments showing that applying small mini-batches in Stochastic
Gradient Descent (SGD) decreases the required number of iterations. Dekel et al. [2012] and Agarwal and Duchi [2012] gave an analysis of SGD with mini-batches for smooth loss functions. Cotter
et al. [2011] studied SGD and accelerated versions of SGD with mini-batches and Takáč et al. [2013]
studied SDCA with mini-batches for SVMs. Duchi et al. [2010] studied dual averaging in distributed
networks as a function of spectral properties of the underlying graph. However, all of these methods
have a polynomial dependence on $1/\epsilon$, while we consider the strongly convex and smooth case in
which a $\log(1/\epsilon)$ rate is achievable.$^2$ Parallel coordinate descent has also been recently studied in
Fercoq and Richtárik [2013], Richtárik and Takáč [2013].
It is interesting to note that most$^3$ of these papers focus on mini-batches as the method of choice for
distributing SGD or SDCA, while ignoring the option to divide the data by features instead of by
examples. A possible reason is the cost of opening communication sockets as discussed in Section 3.
There are various practical considerations that one should take into account when designing a practical system for distributed optimization. We refer the reader, for example, to Dekel [2010], Low
et al. [2010, 2012], Agarwal et al. [2011], Niu et al. [2011].
The more general problem of distributed PAC learning has been studied recently in Daume III et al.
[2012], Balcan et al. [2012]. See also Long and Servedio [2011]. In particular, they obtain algorithms with $O(\log(1/\epsilon))$ communication complexity. However, these works consider efficient
algorithms only in the realizable case.
Acknowledgements: Shai Shalev-Shwartz is supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). Tong Zhang is supported by the following grants:
NSF IIS-1016061, NSF DMS-1007527, and NSF IIS-1250985.
References
Alekh Agarwal and John C Duchi. Distributed delayed stochastic optimization. In Decision and
Control (CDC), 2012 IEEE 51st Annual Conference on, pages 5451–5452. IEEE, 2012.
Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. A reliable effective terascale
linear learning system. arXiv preprint arXiv:1110.4198, 2011.
Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. arXiv preprint arXiv:1204.3514, 2012.
Joseph K Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. Parallel coordinate descent
for l1-regularized loss minimization. In ICML, 2011.
Andrew Cotter, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Better mini-batch algorithms
via accelerated gradient methods. arXiv preprint arXiv:1106.4574, 2011.
Hal Daume III, Jeff M Phillips, Avishek Saha, and Suresh Venkatasubramanian. Protocols for learning classifiers on distributed data. arXiv preprint arXiv:1202.6078, 2012.
Ofer Dekel. Distribution-calibrated hierarchical classification. In NIPS, 2010.
Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction
using mini-batches. The Journal of Machine Learning Research, 13:165–202, 2012.
$^2$ It should be noted that one can use our results for Lipschitz functions as well by smoothing the loss function
(see Nesterov [2005]). By doing so, we can interpolate between the $1/\epsilon^2$ rate of non-accelerated methods and
the $1/\epsilon$ rate of accelerated gradient.
$^3$ There are a few exceptions in the context of stochastic coordinate descent in the primal. See for example
Bradley et al. [2011], Richtárik and Takáč [2012b].
John Duchi, Alekh Agarwal, and Martin J Wainwright. Distributed dual averaging in networks.
Advances in Neural Information Processing Systems, 23, 2010.
Olivier Fercoq and Peter Richtárik. Smooth minimization of nonsmooth functions with parallel
coordinate descent methods. arXiv preprint arXiv:1309.5885, 2013.
Nicolas Le Roux, Mark Schmidt, and Francis Bach. A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets. arXiv
preprint arXiv:1202.6258, 2012.
Phil Long and Rocco Servedio. Algorithms and hardness results for parallel large margin learning.
In NIPS, 2011.
Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M
Hellerstein. Graphlab: A new framework for parallel machine learning. arXiv preprint
arXiv:1006.4990, 2010.
Yucheng Low, Danny Bickson, Joseph Gonzalez, Carlos Guestrin, Aapo Kyrola, and Joseph M
Hellerstein. Distributed graphlab: A framework for machine learning and data mining in the
cloud. Proceedings of the VLDB Endowment, 5(8):716–727, 2012.
Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103
(1):127–152, 2005.
Yurii Nesterov. Gradient methods for minimizing composite objective function, 2007.
Feng Niu, Benjamin Recht, Christopher R?e, and Stephen J Wright. Hogwild!: A lock-free approach
to parallelizing stochastic gradient descent. arXiv preprint arXiv:1106.5730, 2011.
Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent
methods for minimizing a composite function. Mathematical Programming, pages 1–38, 2012a.
Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization.
arXiv preprint arXiv:1212.0873, 2012b.
Peter Richtárik and Martin Takáč. Distributed coordinate descent method for learning with big data.
arXiv preprint arXiv:1310.2059, 2013.
Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized
loss minimization. Journal of Machine Learning Research, 14:567–599, Feb 2013a.
Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent.
arXiv, 2013b.
Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal Estimated sub-GrAdient
SOlver for SVM. In ICML, pages 807–814, 2007.
Martin Takáč, Avleen Bijral, Peter Richtárik, and Nathan Srebro. Mini-batch primal and dual methods for SVMs. arXiv, 2013.
4,352 | 4,939 | Estimation, Optimization, and Parallelism when
Data is Sparse
H. Brendan McMahan^2
Google, Inc.^2
Seattle, WA 98103
[email protected]
John C. Duchi^{1,2}
Michael I. Jordan^1
University of California, Berkeley^1
Berkeley, CA 94720
{jduchi,jordan}@eecs.berkeley.edu
Abstract
We study stochastic optimization problems when the data is sparse, which is in
a sense dual to current perspectives on high-dimensional statistical learning and
optimization. We highlight both the difficulties (in terms of increased sample
complexity that sparse data necessitates) and the potential benefits, in terms of
allowing parallelism and asynchrony in the design of algorithms. Concretely, we
derive matching upper and lower bounds on the minimax rate for optimization
and learning with sparse data, and we exhibit algorithms achieving these rates.
We also show how leveraging sparsity leads to (still minimax optimal) parallel
and asynchronous algorithms, providing experimental evidence complementing
our theoretical results on several medium to large-scale learning tasks.
1
Introduction and problem setting
In this paper, we investigate stochastic optimization problems in which the data is sparse. Formally, let {F(·; ξ), ξ ∈ Ξ} be a collection of real-valued convex functions, each of whose domains contains the convex set X ⊆ R^d. For a probability distribution P on Ξ, we consider the following optimization problem:

\min_{x \in X} f(x) := E[F(x; \xi)] = \int_\Xi F(x; \xi)\, dP(\xi).   (1)

By data sparsity, we mean the samples ξ are sparse: assuming that samples ξ lie in R^d, and defining the support supp(x) of a vector x to be the set of indices of its non-zero components, we assume

supp \, \partial_x F(x; \xi) \subseteq supp \, \xi.   (2)

The sparsity condition (2) means that F(x; ξ) does not "depend" on the values of x_j for indices j such that ξ_j = 0.¹ This type of data sparsity is prevalent in statistical optimization problems and machine learning applications; in spite of its prevalence, study of such problems has been limited.
As a motivating example, consider a text classification problem: data ξ ∈ R^d represents words appearing in a document, and we wish to minimize a logistic loss F(x; ξ) = log(1 + exp(⟨ξ, x⟩)) on the data (we encode the label implicitly with the sign of ξ). Such generalized linear models satisfy the sparsity condition (2), and while instances are of very high dimension, in any given instance, very few entries of ξ are non-zero [8]. From a modelling perspective, it thus makes sense to allow a dense predictor x: any non-zero entry of ξ is potentially relevant and important. In a sense, this is dual to the standard approaches to high-dimensional problems; one usually assumes that the data ξ may be dense, but there are only a few relevant features, and thus a parsimonious model x is desirous [2]. So

¹Formally, if π_ξ denotes the coordinate projection zeroing all indices j of its argument where ξ_j = 0, then F(π_ξ(x); ξ) = F(x; ξ) for all x, ξ. This follows from the first-order conditions for convexity [6].
while such sparse data problems are prevalent (natural language processing, information retrieval, and other large data settings all have significant data sparsity) they do not appear to have attracted as much study as their high-dimensional "duals" of dense data and sparse predictors.
In this paper, we investigate algorithms and their inherent limitations for solving problem (1) under
natural conditions on the data generating distribution. Recent work in the optimization and machine
learning communities has shown that data sparsity can be leveraged to develop parallel (and even
asynchronous [12]) optimization algorithms [13, 14], but this work does not consider the statistical
effects of data sparsity. In another line of research, Duchi et al. [4] and McMahan and Streeter [9]
develop "adaptive" stochastic gradient algorithms to address problems in sparse data regimes (2).
These algorithms exhibit excellent practical performance and have theoretical guarantees on their
convergence, but it is not clear if they are optimal (in that no algorithm can attain better statistical performance) or whether they can leverage parallel computing as in the papers [12, 14].
In this paper, we take a two-pronged approach. First, we investigate the fundamental limits of optimization and learning algorithms in sparse data regimes. In doing so, we derive lower bounds on the optimization error of any algorithm for problems of the form (1) with sparsity condition (2). These results have two main implications. They show that in some scenarios, learning with sparse data is quite difficult, as essentially each coordinate j ∈ [d] can be relevant and must be optimized for. In spite of this seemingly negative result, we are also able to show that the AdaGrad algorithms of [4, 9] are optimal, and we show examples in which their dependence on the dimension d can be made exponentially better than standard gradient methods.

As the second facet of our two-pronged approach, we study how sparsity may be leveraged in parallel computing frameworks to give substantially faster algorithms that still achieve optimal sample complexity in terms of the number of samples ξ used. We develop two new algorithms, asynchronous dual averaging (AsyncDA) and asynchronous AdaGrad (AsyncAdaGrad), which allow asynchronous parallel solution of the problem (1) for general convex f and X. Combining insights of Niu et al.'s Hogwild! [12] with a new analysis, we prove our algorithms achieve linear speedup in the number of processors while maintaining optimal statistical guarantees. We also give experiments on text-classification and web-advertising tasks to illustrate the benefits of the new algorithms.
2
Minimax rates for sparse optimization
We begin our study of sparse optimization problems by establishing their fundamental statistical and
optimization-theoretic properties. To do this, we derive bounds on the minimax convergence rate
of any algorithm for such problems. Formally, let x̂ denote any estimator for a minimizer of the objective (1). We define the optimality gap ε_N for the estimator x̂ based on N samples ξ¹, …, ξ^N from the distribution P as

\epsilon_N(\hat{x}, F, X, P) := f(\hat{x}) - \inf_{x \in X} f(x) = E_P[F(\hat{x}; \xi)] - \inf_{x \in X} E_P[F(x; \xi)].

This quantity is a random variable, since x̂ is a random variable (it is a function of ξ¹, …, ξ^N). To define the minimax error, we thus take expectations of the quantity ε_N, though we require a bit more than simply E[ε_N]. We let P denote a collection of probability distributions, and we consider a collection of loss functions F specified by a collection F of convex losses F : X × Ξ → R. We can then define the minimax error for the family of losses F and distributions P as

\epsilon_N^*(X, P, F) := \inf_{\hat{x}} \sup_{P \in P} \sup_{F \in F} E_P[\epsilon_N(\hat{x}(\xi^{1:N}), F, X, P)],   (3)

where the infimum is taken over all possible estimators x̂ (an estimator is an optimization scheme, or a measurable mapping x̂ : Ξ^N → X).
2.1
Minimax lower bounds
Let us now give a more precise characterization of the (natural) set of sparse optimization problems we consider to provide the lower bound. For the next proposition, we let P consist of distributions supported on Ξ = {−1, 0, 1}^d, and we let p_j := P(ξ_j ≠ 0) be the marginal probability of appearance of feature j ∈ {1, …, d}. For our class of functions, we set F to consist of functions F satisfying the sparsity condition (2) and with the additional constraint that for g ∈ ∂_x F(x; ξ), we have that the jth coordinate |g_j| ≤ M_j for a constant M_j < ∞. We obtain

Proposition 1. Let the conditions of the preceding paragraph hold. Let R be a constant such that X ⊇ [−R, R]^d. Then

\epsilon_N^*(X, P, F) \ge \frac{R}{8} \sum_{j=1}^d M_j \min\Big\{ p_j, \sqrt{\frac{p_j}{N \log 3}} \Big\}.
We provide the proof of Proposition 1 in the supplement A.1 in the full version of the paper, providing a few remarks here. We begin by giving a corollary to Proposition 1 that follows when the data
ξ obeys a type of power law: let p_0 ∈ [0, 1], and assume that P(ξ_j ≠ 0) = p_0 j^{−α}. We have

Corollary 2. Let α ≥ 0. Let the conditions of Proposition 1 hold with M_j ≡ M for all j, and assume the power law condition P(ξ_j ≠ 0) = p_0 j^{−α} on coordinate appearance probabilities. Then

(1) If d > (p_0 N)^{1/α},

\epsilon_N^*(X, P, F) \ge \frac{MR}{8} \Big[ \frac{2}{2-\alpha} \sqrt{\frac{p_0}{N}} \big( (p_0 N)^{\frac{2-\alpha}{2\alpha}} - 1 \big) + \frac{p_0}{1-\alpha} \big( d^{1-\alpha} - (p_0 N)^{\frac{1-\alpha}{\alpha}} \big) \Big].

(2) If d ≤ (p_0 N)^{1/α},

\epsilon_N^*(X, P, F) \ge \frac{MR}{8} \sqrt{\frac{p_0}{N}} \cdot \frac{d^{1-\alpha/2} - 1}{1 - \alpha/2}.
Expanding Corollary 2 slightly, for simplicity assume the number of samples is large enough that d ≤ (p_0 N)^{1/α}. Then we find that the lower bound on optimization error is of order

MR \sqrt{p_0/N}\, d^{1-\alpha/2} when α < 2,  MR \sqrt{p_0/N}\, \log d when α = 2,  and MR \sqrt{p_0/N} when α > 2.   (4)

These results beg the question of tightness: are they improvable? As we see presently, they are not.
2.2
Algorithms for attaining the minimax rate
To show that the lower bounds of Proposition 1 and its subsequent specializations are sharp, we review a few stochastic gradient algorithms. We begin with stochastic gradient descent (SGD): SGD repeatedly samples ξ ∼ P, computes g ∈ ∂_x F(x; ξ), then performs the update x ← Π_X(x − ηg), where η is a stepsize parameter and Π_X denotes Euclidean projection onto X. Standard analyses of stochastic gradient descent [10] show that after N samples ξ^i, the SGD estimator x̂(N) satisfies

E[f(\hat{x}(N))] - \inf_{x \in X} f(x) \le O(1) \cdot \frac{R_2 M (\sum_{j=1}^d p_j)^{1/2}}{\sqrt{N}},   (5)
where R_2 denotes the ℓ2-radius of X. Dual averaging, due to Nesterov [11] (sometimes called "follow the regularized leader" [5]) is a more recent algorithm. In dual averaging, one again samples g ∈ ∂_x F(x; ξ), but instead of updating the parameter vector x one updates a dual vector z by z ← z + g, then computes

x \leftarrow \arg\min_{x \in X} \Big\{ \langle z, x \rangle + \frac{1}{\eta} \psi(x) \Big\},

where ψ(x) is a strongly convex function defined over X (often one takes ψ(x) = ½‖x‖₂²). As we discuss presently, the dual averaging algorithm is somewhat more natural in asynchronous and parallel computing environments, and it enjoys the same type of convergence guarantees (5) as SGD.
The AdaGrad algorithm [4, 9] is an extension of the preceding stochastic gradient methods. It maintains a diagonal matrix S, where upon receiving a new sample ξ, AdaGrad performs the following: it computes g ∈ ∂_x F(x; ξ), then updates

S_j \leftarrow S_j + g_j^2  for j ∈ [d].

The dual averaging variant of AdaGrad updates the usual dual vector z ← z + g; the update to x is based on S and a stepsize η and computes

x \leftarrow \arg\min_{x' \in X} \Big\{ \langle z, x' \rangle + \frac{1}{2\eta} \langle x', S^{1/2} x' \rangle \Big\}.
After N samples ξ, the averaged parameter x̂(N) returned by AdaGrad satisfies

E[f(\hat{x}(N))] - \inf_{x \in X} f(x) \le O(1) \cdot \frac{R_\infty M}{\sqrt{N}} \sum_{j=1}^d \sqrt{p_j},   (6)

where R_∞ denotes the ℓ∞-radius of X (cf. [4, Section 1.3 and Theorem 5]). By inspection, the AdaGrad rate (6) matches the lower bound in Proposition 1 and is thus optimal. It is interesting to note, though, that in the power law setting of Corollary 2 (recall the error order (4)), a calculation shows that the multiplier for the SGD guarantee (5) becomes R_∞ √d max{d^{(1−α)/2}, 1}, while AdaGrad attains rate at worst R_∞ max{d^{1−α/2}, log d}. For α > 1, the AdaGrad rate is no worse, and for α ≥ 2, is more than √d / log d better: an exponential improvement in the dimension.
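For concreteness, here is a minimal sketch of the dual averaging variant of AdaGrad described above, under the simplifying assumption X = R^d (so the coordinate-wise argmin has the closed form x_j = −η z_j / √S_j). The small constant added to √S is a numerical-safety choice of this sketch, not part of the algorithm.

import numpy as np

def adagrad_da_step(z, S, g, eta, eps=1e-12):
    # S accumulates squared gradient coordinates; z accumulates gradients.
    S = S + g ** 2
    z = z + g
    # argmin_x <z, x> + (1/(2*eta)) <x, S^{1/2} x>, coordinate-wise over R^d
    x = -eta * z / (np.sqrt(S) + eps)
    return z, S, x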
3
Parallel and asynchronous optimization with sparsity
As we note in the introduction, recent works [12, 14] have suggested that sparsity can yield benefits in our ability to parallelize stochastic gradient-type algorithms. Given the optimality of AdaGrad-type algorithms, it is natural to focus on their parallelization in the hope that we can leverage their ability to "adapt" to sparsity in the data. To provide the setting for our further algorithms, we first revisit Niu et al.'s Hogwild! [12]. Hogwild! is an asynchronous (parallelized) stochastic gradient algorithm for optimization over product-space domains, meaning that X in problem (1) decomposes as X = X_1 × ⋯ × X_d, where X_j ⊆ R. Fix a stepsize η > 0. A pool of independently running processors then performs the following updates asynchronously to a centralized vector x:

1. Sample ξ ∼ P
2. Read x and compute g ∈ ∂_x F(x; ξ)
3. For each j s.t. g_j ≠ 0, update x_j ← Π_{X_j}(x_j − ηg_j).

Here Π_{X_j} denotes projection onto the jth coordinate of the domain X. The key of Hogwild! is that in step 2, the parameter x is allowed to be inconsistent (it may have received partial gradient updates from many processors), and for appropriate problems, this inconsistency is negligible. Indeed, Niu et al. [12] show linear speedup in optimization time as the number of processors grows; they show this empirically in many scenarios, providing a proof under the somewhat restrictive assumptions that there is at most one non-zero entry in any gradient g and that f has Lipschitz gradients.
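A minimal sketch of one Hogwild!-style worker follows, assuming X = R^d (so the per-coordinate projection is the identity) and a user-supplied grad_fn that returns a sparse gradient as a {coordinate: value} dict; both are assumptions of this sketch. The essential point is that the shared vector x is read and written without any locking.

def hogwild_worker(x, data, grad_fn, eta, num_updates, rng):
    # x is a shared numpy array that several threads mutate concurrently;
    # rng is e.g. numpy.random.default_rng(seed).
    for _ in range(num_updates):
        xi = data[rng.integers(len(data))]   # step 1: sample xi ~ P
        g = grad_fn(x, xi)                   # step 2: sparse gradient {j: g_j}
        for j, gj in g.items():              # step 3: unsynchronized writes
            x[j] -= eta * gj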
3.1
Asynchronous dual averaging
A weakness of Hogwild! is that it appears only applicable to problems for which the domain X is a product space, and its analysis assumes ‖g‖₀ = 1 for all gradients g. In an effort to alleviate these difficulties, we now develop and present our asynchronous dual averaging algorithm, AsyncDA. AsyncDA maintains and updates a centralized dual vector z instead of a parameter x, and a pool of processors perform asynchronous updates to z, where each processor independently iterates:

1. Read z and compute x := argmin_{x∈X} { ⟨z, x⟩ + (1/η) ψ(x) }   // Implicitly increment "time" counter t and let x(t) = x
2. Sample ξ ∼ P and let g ∈ ∂_x F(x; ξ)   // Let g(t) = g
3. For j ∈ [d] such that g_j ≠ 0, update z_j ← z_j + g_j.

Because the actual computation of the vector x in AsyncDA is performed locally on each processor in step 1 of the algorithm, the algorithm can be executed with any proximal function ψ and domain X. The only communication point between any of the processors is the addition operation in step 3. Since addition is commutative and associative, forcing all asynchrony to this point of the algorithm is a natural strategy for avoiding synchronization problems.
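The sketch below mirrors the three steps for one AsyncDA worker under the same simplifying assumptions as before (X = R^d, ψ(x) = ½‖x‖₂², sparse gradients returned as a dict). Note that the only shared state it touches is the dual vector z, and only through coordinate-wise additions.

def async_da_worker(z, data, grad_fn, eta, num_updates, rng):
    for _ in range(num_updates):
        x = -eta * z                       # step 1: local x from (possibly stale) z
        xi = data[rng.integers(len(data))]
        g = grad_fn(x, xi)                 # step 2: sparse subgradient {j: g_j}
        for j, gj in g.items():
            z[j] += gj                     # step 3: commutative, associative update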
In our analysis of AsyncDA, and in our subsequent analysis of the adaptive methods, we require a measurement of time elapsed. With that in mind, we let t denote a time index that exists (roughly) behind-the-scenes. We let x(t) denote the vector x ∈ X computed in the tth step 1 of the AsyncDA algorithm, that is, whichever is the tth x actually computed by any of the processors. This quantity exists and is recoverable from the algorithm, and it is possible to track the running sum Σ_{τ≤t} x(τ). Additionally, we state two assumptions encapsulating the conditions underlying our analysis.

Assumption A. There is an upper bound m on the delay of any processor. In addition, for each j ∈ [d] there is a constant p_j ∈ [0, 1] such that P(ξ_j ≠ 0) ≤ p_j.

We also require certain continuity (Lipschitzian) properties of the loss functions; these amount to a second moment constraint on the instantaneous ∂F and a rough measure of gradient sparsity.

Assumption B. There exist constants M and (M_j)_{j=1}^d such that the following bounds hold for all x ∈ X: E[‖∂_x F(x; ξ)‖₂²] ≤ M² and for each j ∈ [d] we have E[|∂_{x_j} F(x; ξ)|] ≤ p_j M_j.
With these definitions, we have the following theorem, which captures the convergence behavior of AsyncDA under the assumption that X is a Cartesian product, meaning that X = X_1 × ⋯ × X_d, where X_j ⊆ R, and that ψ(x) = ½‖x‖₂². Note the algorithm itself can still be efficiently parallelized for more general convex X, even if the theorem does not apply.

Theorem 3. Let Assumptions A and B and the conditions in the preceding paragraph hold. Then

E\Big[ \sum_{t=1}^T F(x(t); \xi^t) - F(x^*; \xi^t) \Big] \le \frac{1}{2\eta} \|x^*\|_2^2 + \frac{\eta}{2} T M^2 + \eta T m \sum_{j=1}^d p_j^2 M_j^2.
We now provide a few remarks to explain and simplify the result. Under the more stringent condition that |∂_{x_j} F(x; ξ)| ≤ M_j, Assumption A implies E[‖∂_x F(x; ξ)‖₂²] ≤ Σ_{j=1}^d p_j M_j². Thus, for the remainder of this section we take M² = Σ_{j=1}^d p_j M_j², which upper bounds the Lipschitz continuity constant of the objective function f. We then obtain the following corollary.

Corollary 4. Define x̂(T) = (1/T) Σ_{t=1}^T x(t), and set η = ‖x*‖₂ / (M √T). Then

E[f(\hat{x}(T)) - f(x^*)] \le \frac{M \|x^*\|_2}{\sqrt{T}} + \frac{\|x^*\|_2 \, m}{2 M \sqrt{T}} \sum_{j=1}^d p_j^2 M_j^2.
Corollary 4 is nearly immediate: since ξ^t is independent of x(t), we have E[F(x(t); ξ^t) | x(t)] = f(x(t)); applying Jensen's inequality to f(x̂(T)) and performing an algebraic manipulation give the result. If the data is suitably sparse, meaning that p_j ≤ 1/m, the bound in Corollary 4 simplifies to

E[f(\hat{x}(T)) - f(x^*)] \le \frac{3}{2} \cdot \frac{\big( \sum_{j=1}^d p_j M_j^2 \big)^{1/2} \|x^*\|_2}{\sqrt{T}} = \frac{3}{2} \cdot \frac{M \|x^*\|_2}{\sqrt{T}},   (7)

which is the convergence rate of stochastic gradient descent even in centralized settings (5). The convergence guarantee (7) shows that after T timesteps, the error scales as 1/√T; however, if we have k processors, updates occur roughly k times as quickly, as they are asynchronous, and in time scaling as N/k, we can evaluate N gradient samples: a linear speedup.
3.2
Asynchronous AdaGrad
We now turn to extending AdaGrad to asynchronous settings, developing AsyncAdaGrad (asynchronous AdaGrad). As in the AsyncDA algorithm, AsyncAdaGrad maintains a shared dual vector z (the sum of gradients) and the shared matrix S, which is the diagonal sum of squares of gradient entries (recall Section 2.2). The matrix S is initialized as diag(δ²), where δ_j ≥ 0 is an initial value. Each processor asynchronously performs the following iterations:

1. Read S and z and set G = S^{1/2}. Compute x := argmin_{x∈X} { ⟨z, x⟩ + (1/2η) ⟨x, Gx⟩ }   // Implicitly increment "time" counter t and let x(t) = x, S(t) = S
2. Sample ξ ∼ P and let g ∈ ∂F(x; ξ)
3. For j ∈ [d] such that g_j ≠ 0, update S_j ← S_j + g_j² and z_j ← z_j + g_j.

As in the description of AsyncDA, we note that x(t) is the vector x ∈ X computed in the tth "step" of the algorithm (step 1), and we similarly associate ξ^t with x(t).
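Continuing the earlier sketches (X = R^d, sparse dict gradients; both assumptions of these sketches), one AsyncAdaGrad worker looks roughly as follows; S must be initialized to δ² > 0 so that G = S^{1/2} is invertible.

import numpy as np

def async_adagrad_worker(z, S, data, grad_fn, eta, num_updates, rng):
    for _ in range(num_updates):
        x = -eta * z / np.sqrt(S)          # step 1: argmin of <z,x> + <x,Gx>/(2*eta)
        xi = data[rng.integers(len(data))]
        g = grad_fn(x, xi)                 # step 2: sparse gradient {j: g_j}
        for j, gj in g.items():            # step 3: asynchronous coordinate updates
            S[j] += gj ** 2
            z[j] += gj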
To analyze AsyncAdaGrad, we make a somewhat stronger assumption on the sparsity properties of the losses F than Assumption B.

Assumption C. There exist constants (M_j)_{j=1}^d such that E[(∂_{x_j} F(x; ξ))² | ξ_j ≠ 0] ≤ M_j² for all x ∈ X.

Indeed, taking M² = Σ_j p_j M_j² shows that Assumption C implies Assumption B with specific constants. We then have the following convergence result.

Theorem 5. In addition to the conditions of Theorem 3, let Assumption C hold. Assume that for all j we have δ² ≥ M_j² m and X ⊆ [−R_∞, R_∞]^d. Then

\sum_{t=1}^T E[F(x(t); \xi^t) - F(x^*; \xi^t)] \le \sum_{j=1}^d \min\Big\{ \frac{1}{\eta} R_\infty^2 \, E\Big[ \big( \delta^2 + \sum_{t=1}^T g_j(t)^2 \big)^{1/2} \Big] + \eta \, E\Big[ \big( \sum_{t=1}^T g_j(t)^2 \big)^{1/2} \Big] (1 + p_j m), \; M_j R_\infty p_j T \Big\}.
It is possible to relax the condition on the initial constant diagonal term; we defer this to the full
version of the paper.
It is natural to ask in which situations the bound provided by Theorem 5 is optimal. We note that, as in the case with Theorem 3, we may obtain a convergence rate for f(x̂(T)) − f(x*) using convexity, where x̂(T) = (1/T) Σ_{t=1}^T x(t). By Jensen's inequality, we have for any δ that

E\Big[ \big( \delta^2 + \sum_{t=1}^T g_j(t)^2 \big)^{1/2} \Big] \le \Big( \delta^2 + \sum_{t=1}^T E[g_j(t)^2] \Big)^{1/2} \le \sqrt{\delta^2 + T p_j M_j^2}.
For interpretation, let us now make a few assumptions on the probabilities p_j. If we assume that p_j ≤ c/m for a universal (numerical) constant c, then Theorem 5 guarantees that

E[f(\hat{x}(T)) - f(x^*)] \le O(1) \Big( \frac{1}{\eta} R_\infty^2 + \eta \Big) \sum_{j=1}^d M_j \min\Big\{ \frac{\sqrt{\log(T)/T + p_j}}{\sqrt{T}}, \; p_j \Big\},   (8)

which is the convergence rate of AdaGrad except for a small factor of min{√(log T / T), p_j} in addition to the usual √(p_j / T) rate. In particular, optimizing by choosing η = R_∞, and assuming p_j ≳ (1/T) log T, we have convergence guarantee

E[f(\hat{x}(T)) - f(x^*)] \le O(1) R_\infty \sum_{j=1}^d M_j \min\Big\{ \sqrt{\frac{p_j}{T}}, \; p_j \Big\},

which is minimax optimal by Proposition 1.
In fact, however, the bounds of Theorem 5 are somewhat stronger: they provide bounds using the expectation of the squared gradients g_j(t) rather than the maximal value M_j, though the bounds are perhaps clearer in the form (8). We note also that our analysis applies to more adversarial settings than stochastic optimization (e.g., to online convex optimization [5]). Specifically, an adversary may choose an arbitrary sequence of functions subject to the random data sparsity constraint (2), and our results provide an expected regret bound, which is strictly stronger than the stochastic convergence guarantees provided (and guarantees high-probability convergence in stochastic settings [3]). Moreover, our comments in Section 2 about the relative optimality of AdaGrad versus standard gradient methods apply. When the data is sparse, we indeed should use asynchronous algorithms, but using adaptive methods yields even more improvement than simple gradient-based methods.
Experiments
In this section, we give experimental validation of our theoretical results on A SYNC A DAG RAD and
A SYNC DA, giving results on two datasets selected for their high-dimensional sparsity.2
2
In our experiments, A SYNC DA and H OGWILD ! had effectively identical performance.
6
Figure 1. Experiments with URL data. Left: speedup relative to one processor. Middle: training dataset loss versus number of processors. Right: test set error rate versus number of processors. A-AdaGrad abbreviates AsyncAdaGrad.
[Figure 2 shows three panels of relative log-loss versus number of passes: fixed stepsizes on training data (L2 = 0); fixed stepsizes on test data (L2 = 0); and the impact of ℓ2 regularization (penalty 0 vs. 80) on test error, comparing A-AdaGrad (η from 0.002 to 0.016) and A-DA (η from 0.8 to 3.2).]
Figure 2: Relative accuracy for various stepsize choices on a click-through-rate prediction dataset.
4.1
Malicious URL detection
For our first set of experiments, we consider the speedup attainable by applying AsyncAdaGrad and AsyncDA, investigating the performance of each algorithm on a malicious URL prediction task [7]. The dataset in this case consists of an anonymized collection of URLs labeled as malicious (e.g., spam, phishing, etc.) or benign over a span of 120 days. The data in this case consists of 2.4 × 10⁶ examples with dimension d = 3.2 × 10⁶ (sparse) features. We perform several experiments, randomly dividing the dataset into 1.2 × 10⁶ training and test samples for each experiment.

In Figure 1 we compare the performance of AsyncAdaGrad and AsyncDA after a single pass through the training dataset. (For each algorithm, we choose the stepsize η for optimal training set performance.) We perform the experiments on a single machine running Ubuntu Linux with six cores (with two-way hyperthreading) and 32GB of RAM. From the left-most plot in Fig. 1, we see that up to six processors, both AsyncDA and AsyncAdaGrad enjoy the expected linear speedup, and from 6 to 12, they continue to enjoy a speedup that is linear in the number of processors though at a lesser slope (this is the effect of hyperthreading). For more than 12 processors, there is no further benefit to parallelism on this machine.

The two right plots in Figure 1 plot performance of the different methods (with standard errors) versus the number of worker threads used. Both are essentially flat; increasing the amount of parallelism does nothing to the average training loss or the test error rate for either method. It is clear, however, that for this dataset, the adaptive AsyncAdaGrad algorithm provides substantial performance benefits over AsyncDA.
4.2
Click-through-rate prediction experiments
We also experiment on a proprietary dataset consisting of search ad impressions. Each example corresponds to showing a search-engine user a particular text ad in response to a query string. From this, we construct a very sparse feature vector based on the text of the ad displayed and the query string (no user-specific data is used). The target label is 1 if the user clicked the ad and −1 otherwise.
Figure 3. (A) Relative test-set log-loss for AsyncDA and AsyncAdaGrad, choosing the best stepsize (within a factor of about 1.4×) individually for each number of passes. (B) Effective speedup for AsyncAdaGrad. (C) The best stepsize η, expressed as a scaling factor on the stepsize used for one pass. (D) Five runs with different random seeds for each algorithm (with ℓ2 penalty 80).
We fit logistic regression models using both AsyncDA and AsyncAdaGrad. We run extensive experiments on a moderate-sized dataset (about 10⁷ examples, split between training and testing), which allows thorough investigation of the impact of the stepsize η, the number of training passes,³ and ℓ2-regularization on accuracy. For these experiments we used 32 threads on 16 core machines for each run, as AsyncAdaGrad and AsyncDA achieve similar speedups from parallelization.

On this dataset, AsyncAdaGrad typically achieves an effective additional speedup over AsyncDA of 4× or more. That is, to reach a given level of accuracy, AsyncDA generally needs four times as many effective passes over the dataset. We measure accuracy with log-loss (the logistic loss) averaged over five runs using different random seeds (which control the order in which the algorithms sample examples during training). We report relative values in Figures 2 and 3, that is, the ratio of the mean loss for the given datapoint to the lowest (best) mean loss obtained. Our results are not particularly sensitive to the choice of relative log-loss as the metric of interest; we also considered AUC (the area under the ROC curve) and observed similar results.

Figure 2 shows relative log-loss as a function of the number of training passes for various stepsizes. Without regularization, AsyncAdaGrad is prone to overfitting: it achieves significantly higher accuracy on the training data (Fig. 2 (left)), but unless the stepsize is tuned carefully to the number of passes, it will overfit (Fig. 2 (middle)). Fortunately, the addition of ℓ2 regularization largely solves this problem. Indeed, Figure 2 (right) shows that while adding an ℓ2 penalty of 80 has very little impact on AsyncDA, it effectively prevents the overfitting of AsyncAdaGrad.⁴
Fixing the ℓ2 regularization multiplier to 80, we varied the stepsize η over a multiplicative grid with resolution √2 for each number of passes and for each algorithm. Figure 3 reports the results obtained by selecting the best stepsize in terms of test set log-loss for each number of passes. Figure 3(A) shows relative log-loss of the best stepsize for each algorithm; 3(B) shows the relative time AsyncDA requires with respect to AsyncAdaGrad to achieve a given loss. Specifically, Fig. 3(B) shows the ratio of the number of passes the algorithms require to achieve a fixed loss, which gives a broader estimate of the speedup obtained by using AsyncAdaGrad; speedups range from 3.6× to 12×. Figure 3(C) shows the optimal stepsizes as a function of the best setting for one pass. The optimal stepsize decreases moderately for AsyncAdaGrad, but is somewhat noisy for AsyncDA.

It is interesting to note that AsyncAdaGrad's accuracy is largely independent of the ordering of the training data, while AsyncDA shows significant variability. This can be seen both in the error bars on Figure 3(A), and explicitly in Figure 3(D), where we plot one line for each of the five random seeds used. Thus, while on the one hand AsyncDA requires somewhat less tuning of the stepsize and ℓ2 parameter, tuning AsyncAdaGrad is much easier because of its predictable response.
³Here "number of passes" more precisely means the expected number of times each example in the dataset is trained on. That is, each worker thread randomly selects a training example from the dataset for each update, and we continued making updates until (dataset size) × (number of passes) updates have been processed.

⁴For both algorithms, this is accomplished by adding the term (80/2)‖x‖₂² to the proximal function ψ. We can achieve slightly better results for AsyncAdaGrad by varying the ℓ2 penalty with the number of passes.
References
[1] P. Auer and C. Gentile. Adaptive and self-confident online learning algorithms. In Proceedings
of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[2] P. B?uhlmann and S. van de Geer. Statistics for High-Dimensional Data: Methods, Theory and
Applications. Springer, 2011.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning
algorithms. IEEE Transactions on Information Theory, 50(9):2050?2057, September 2004.
[4] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and
stochastic optimization. Journal of Machine Learning Research, 12:2121?2159, 2011.
[5] E. Hazan. The convex optimization approach to regret minimization. In Optimization for
Machine Learning, chapter 10. MIT Press, 2012.
[6] J. Hiriart-Urruty and C. Lemar?echal. Convex Analysis and Minimization Algorithms I & II.
Springer, New York, 1996.
[7] J. Ma, L. K. Saul, S. Savage, and G. M. Voelker. Identifying malicious urls: An application of
large-scale online learning. In Proceedings of the 26th International Conference on Machine
Learning, 2009.
[8] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT
Press, 1999.
[9] B. McMahan and M. Streeter. Adaptive bound optimization for online convex optimization.
In Proceedings of the Twenty Third Annual Conference on Computational Learning Theory,
2010.
[10] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach
to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[11] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):261–283, 2009.
[12] F. Niu, B. Recht, C. Ré, and S. Wright. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 24, 2011.
[13] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization.
arXiv:1212.0873 [math.OC], 2012. URL http://arxiv.org/abs/1212.0873.
[14] M. Takáč, A. Bijral, P. Richtárik, and N. Srebro. Mini-batch primal and dual methods for
SVMs. In Proceedings of the 30th International Conference on Machine Learning, 2013.
4,353 | 494 | Learning in Feedforward Networks with Nonsmooth
Functions
Nicholas J. Redding?
Information Technology Division
Defence Science and Tech. Org.
P.O. Box 1600 Salisbury
Adelaide SA 5108 Australia
T. Downs
Intelligent Machines Laboratory
Dept of Electrical Engineering
University of Queensland
Brisbane Q 4072 Australia
Abstract
This paper is concerned with the problem of learning in networks where some or all of the functions involved are not smooth. Examples of such networks are those whose neural transfer functions are piecewise-linear and those whose error function is defined in terms of the ℓ∞ norm.
Up to now, networks whose neural transfer functions are piecewise-linear have received very little consideration in the literature, but the possibility of using an error function defined in terms of the ℓ∞ norm has received some attention. In this latter work, however, the problems that can occur when gradient methods are used for nonsmooth error functions have not been addressed.
In this paper we draw upon some recent results from the field of nonsmooth optimization (NSO) to present an algorithm for the nonsmooth case. Our motivation for this work arose out of the fact that we have been able to show that, in backpropagation, an error function based upon the ℓ∞ norm overcomes the difficulties which can occur when using the ℓ2 norm.
1 INTRODUCTION
This paper is concerned with the problem of learning in networks where some or all of
the functions involved are not smooth. Examples of such networks are those whose neural
transfer functions are piecewise-linear and those whose error function is defined in terms
of the ℓ∞ norm.
*The author can be contacted via email [email protected].
Up to now, networks whose neural transfer functions are piecewise-linear have received very little consideration in the literature, but the possibility of using an error function defined in terms of the ℓ∞ norm has received some attention [1]. In the work described in [1], however, the problems that can occur when gradient methods are used for nonsmooth error functions have not been addressed.

In this paper we draw upon some recent results from the field of nonsmooth optimization (NSO) to present an algorithm for the nonsmooth case. Our motivation for this work arose out of the fact that we have been able to show [2]¹ that an error function based upon the ℓ∞ norm overcomes the difficulties which can occur when using backpropagation's ℓ2 norm [4].
The framework for NSO is the class of locally Lipschitzian functions [5]. Locally Lipschitzian functions are a broad class of functions that include, but are not limited to, "smooth" (completely differentiable) functions. (Note, however, that this framework does not include step-functions.) We here present a method for training feedforward networks (FFNs) whose behaviour can be described by a locally Lipschitzian function y = f_net(w, x), where the input vector x = (x_1, …, x_n) is an element of the set of patterns X ⊂ R^n, w ∈ R^b is the weight vector, and y ∈ R^m is the m-dimensional output.

The possible networks that fit within the locally Lipschitzian framework include any network that has a continuous, piecewise differentiable description, i.e., continuous functions with nondifferentiable points ("nonsmooth functions").

Training a network involves the selection of a weight vector w* which minimizes an error function E(w). As long as the error function E is locally Lipschitzian, then it can be trained by the procedure that we will outline, which is based upon a new technique for NSO [6].

In Section 2, a description of the difficulties that can occur when gradient methods are applied to nonsmooth problems is presented. In Section 3, a short overview of the Bundle-Trust algorithm [6] for NSO is presented. And in Section 4, details of applying an NSO procedure to training networks with an ℓ∞-based error function are presented, along with simulation results that demonstrate the viability of the technique.
2 FAILURE OF GRADIENT METHODS
Two difficulties which arise when gradient methods are applied to nonsmooth problems will
be discussed here. The first is that gradient descent sometimes fails to converge to a local
minimum, and the second relates to the lack of a stopping criterion for gradient methods.
2.1 THE "JAMMING" EFFECT
We will now show that gradient methods can fail to converge to a local minimum (the
"jamming" effect [7.8]). The particular example used here is taken from [9].
Consider the following function, which has a minimum at the point w* = (0, 0):

f_1(w) = 3(w_1^2 + 2 w_2^2).   (1)

If we start at the point w_0 = (2, 1), it is easily shown that a steepest descent algorithm² would generate the sequence w_1 = (2, −1)/3, w_2 = (2, 1)/9, …, so that the sequence

¹This is quite simple, using a theorem due to Krishnan [3].
²This is achieved by repeatedly performing a line search along the steepest descent direction.
Figure 1: A contour plot of the function f_3. (The nondifferentiable half-line is marked.)
{w_k} oscillates between points on the two half-lines w_1 = 2 w_2 and w_1 = −2 w_2 for w_1 ≥ 0, converging to the optimal point w* = (0, 0). Next, from the function f_1, create a new function f_2 in the following manner:

f_2(w) = \sqrt{3(w_1^2 + 2 w_2^2)}.   (2)

The gradient at any point of f_2 is proportional to the gradient at the same point on f_1, so the sequence of points generated by a gradient descent algorithm starting from (2, 1) on f_2 will be the same as the case for f_1, and will again converge³ to the optimal point, again w* = (0, 0).

Lastly, we shift the optimal point away from (0, 0), but keep a region including the sequence {w_k} unchanged to create a new function f_3(w):

f_3(w) = \sqrt{3(w_1^2 + 2 w_2^2)}  if 0 ≤ |w_2| ≤ 2 w_1,   (\sqrt{3}/3)(w_1 + 4|w_2|)  elsewhere.   (3)
The new function f_3, depicted in Fig. 1, is continuous, has a discontinuous derivative only on the half-line w_1 ≤ 0, w_2 = 0, and is convex with a "minimum" as w_1 → −∞. In spite of this, the steepest descent algorithm still converges to the now nonoptimal "jamming" point (0, 0). A multitude of possible variations to f_1 exist that will achieve a similar result, but the point is clear: gradient methods can lead to trouble when applied to nonsmooth problems.
This lesson is important, because the backpropagation learning algorithm is a smooth gradient descent technique, and as such will have the difficulties described when it, or an extension (e.g., [1]), is applied to a nonsmooth problem.
2.2 STOPPING CRITERION
The second significant problem associated with smooth descent techniques in a nonsmooth context occurs with the stopping criterion. In normal smooth circumstances, a stopping

³Note that for this new sequence of points, the gradient no longer converges to 0 at (0, 0), but oscillates between the values √2(1, ±1).
criterion is determined using

\|\nabla f\| \le \epsilon,   (4)

where ε is a small positive quantity determined by the required accuracy. However, it is frequently the case that the minimum of a nonsmooth function occurs at a nondifferentiable point or "kink", and the gradient is of little value around these points. For example, the gradient of f(w) = |w| has a magnitude of 1 no matter how close w is to the optimum at w = 0.
3 NONSMOOTH OPTIMIZATION

For any locally Lipschitzian function f, the generalized directional derivative always exists, and can be used to define a generalized gradient or subdifferential, denoted by ∂f, which is a compact convex set⁴ [5]. A particular element g ∈ ∂f(w) is termed a subgradient of f at w [5, 10]. In situations where f is strictly differentiable at w, the generalized gradient of f at w is equal to the gradient, i.e., ∂f(w) = {∇f(w)}.

We will now discuss the basic aspects of NSO and in particular the Bundle-Trust (BT) algorithm [6].
Quite naturally, subgradients in NSO provide a substitute for the gradients in standard smooth optimization using gradient descent. Accordingly, in an NSO procedure, we require the following to be satisfied:

At every w, we can compute f(w) and any g ∈ ∂f(w).   (5)

To overcome the jamming effect, however, it is not sufficient to replace the gradient with a subgradient in a gradient descent algorithm: the strictly local information that this provides about the function's behaviour can be misleading. For example, an approach like this will not change the descent path taken from the starting point (2, 1) on the function f_3 (see Fig. 1).
The solution to this problem is to provide some "smearing" of the gradient information by enriching the information at w with knowledge of its surroundings. This can be achieved by replacing the strictly local subgradients g ∈ ∂f(w) by ∪_{v∈B} ∂f(v), where B is a suitable neighbourhood of w, and then defining the ε-generalized gradient ∂_ε f(w) as

\partial_\epsilon f(w) \triangleq \mathrm{co}\Big\{ \bigcup_{v \in B(w,\epsilon)} \partial f(v) \Big\},   (6)

where ε > 0 and small, and co denotes a convex hull. These ideas were first used by [7] to overcome the lack of continuity in minimax problems, and have become the basis for extensive work in NSO.
In an optimization procedure, points in a sequence {w_k, k = 0, 1, …} are visited until a point is reached at which a stopping criterion is satisfied. In an NSO procedure, this occurs when a point w_k is reached that satisfies the condition 0 ∈ ∂_ε f(w_k), and the point is said to be ε-optimal. That is, in the case of convex f, the point w_k is ε-optimal if

f(w_k) \le f(w) + \epsilon \|w - w_k\|  for all w,   (7)

and in the case of nonconvex f,

f(w_k) \le f(w) + \epsilon \|w - w_k\| + \epsilon  for all w ∈ B,   (8)

where B is some neighbourhood of w_k of nonzero dimension. Obviously, as ε → 0, then w_k → w*, at which 0 ∈ ∂f(w*), i.e., w_k is "within ε" of the local minimum w*.

⁴In other words, a set of vectors will define the generalized gradient of a nonsmooth function at a single point, rather than a single vector in the case of smooth functions.
Usually the ε-generalized gradient is not available, and this is why the bundle concept is introduced. The basic idea of a bundle concept in NSO is to replace the ε-generalized gradient by some inner approximating polytope P which will then be used to compute a descent direction. If the polytope P is a sufficiently good approximation to f, then we will find a direction along which to descend (a so-called serious step). In the case where P is not a sufficiently good approximation to f to yield a descent direction, then we perform a null step, staying at our current position w, and try to improve P by adding another subgradient ∂f(v) at some nearby point v to our current position w.

A natural way of approximating f is by using a cutting plane (CP) approximation. The CP approximation of f(w) at the point w_k is given by the expression [6]

\max_{1 \le i \le k} \{ g_i^T (w - w_i) + f(w_i) \},   (9)

where g_i is a subgradient of f at the point w_i. We see then that (9) provides a piecewise linear approximation of convex⁵ f from below, which will coincide with f at all points w_i.
For convenience, we redefine the CP approximation in terms of d = w − w_k, d ∈ R^b, the vector difference of the point of approximation, w, and the current point in the optimization sequence, w_k, giving the CP approximation f_CP of f:

f_{CP}(w_k, d) = \max_{1 \le i \le k} \{ g_i^T d + g_i^T (w_k - w_i) + f(w_i) \}.   (10)
Now, when the CP approximation is minimized to find a descent direction, there is no reason to trust the approximation far away from w_k. So, to discourage a large step size, a stabilizing term (1/2t_k) d^T d, where t_k is positive, is added to the CP approximation.

If the CP approximation at w_k of f is good enough, then the d_k given by

d_k = \arg\min_d \Big\{ f_{CP}(w_k, d) + \frac{1}{2 t_k} d^T d \Big\}   (11)
will produce a descent direction such that a line search along w_k + λd_k will find a new point w_{k+1} at which f(w_{k+1}) < f(w_k) (a serious step). It may happen that f_CP is such a poor approximation of f that a line search along d_k does not yield a descent direction, or yields only a marginal improvement in f. If this occurs, a null step is taken and one enriches the bundle of subgradients from which the CP approximation is computed by adding a subgradient from ∂f(w_k + λd_k) for small λ > 0. Each serious step guarantees a decrease in f, and a stopping criterion is provided by terminating the algorithm as soon as d_k in (11) satisfies the ε-optimality criterion, at which point w_k is ε-optimal. These details are the basis of bundle methods in NSO [9, 10].

The bundle method described suffers from a weak point: its success depends on the delicate selection of the parameter t_k in (11) [6]. This weakness has led to the incorporation of a "trust region" concept [11] into the bundle method to obtain the BT (bundle-trust) algorithm [6].

⁵In the nonconvex f case, (9) is not an approximation to f from below, and additional tolerance parameters must be considered to accommodate this situation [6].
To incorporate a trust region, we define a "radius tt that defines a ball in which we can "trust"
that fa> is a good approximation of f. In the BT algorithm, by following trust region
concepts, the choice of t A: is not made a priori and is determined during the algorithm by
varying tA: in a systematic way (trust part) and improving the CP approximation by null
steps (bundle part) until a satisfactory CP approximation f a> is obtained along with a ball
(in terms of t A:) on which we can trust the approximation. Then the dA: in (11) willlea1 to
a substantial decrease in f.
The full details of the BT algorithm can be found in [6], along with convergence proofs.
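To make the serious/null-step mechanics concrete, the following is a minimal Python sketch of a bundle iteration built around the stabilized CP subproblem (11), solved in its epigraph form. The helper names, the fixed stabilization parameter t, the bundle restart after each serious step, and the ‖w‖₁ test function are illustrative assumptions of this sketch; the BT algorithm of [6] additionally adapts t_k as described above.

```python
import numpy as np
from scipy.optimize import minimize

def cp_direction(G, c, t):
    # Solve d = argmin_d  max_i { g_i^T d + c_i } + ||d||^2 / (2 t)   (cf. (11))
    # via the epigraph form: min_{d, v} v + ||d||^2/(2t)  s.t.  g_i^T d + c_i <= v.
    m, n = G.shape
    x0 = np.zeros(n + 1)                                    # x = (d, v)
    obj = lambda x: x[n] + x[:n] @ x[:n] / (2.0 * t)
    cons = [{'type': 'ineq',
             'fun': lambda x, i=i: x[n] - (G[i] @ x[:n] + c[i])} for i in range(m)]
    return minimize(obj, x0, constraints=cons, method='SLSQP').x[:n]

def bundle_descent(f, subgrad, w0, t=1.0, eps=1e-4, max_iter=200):
    w = np.asarray(w0, dtype=float)
    G, c = [subgrad(w)], [0.0]           # bundle of subgradients and offsets (normalized by -f(w))
    for _ in range(max_iter):
        d = cp_direction(np.array(G), np.array(c), t)
        if np.linalg.norm(d) < eps:      # crude stand-in for the eps-optimality test
            break
        if f(w + d) < f(w) - 1e-12:      # serious step: move to the new point
            w = w + d
            G, c = [subgrad(w)], [0.0]   # restart the bundle (a simplification)
        else:                            # null step: enrich the bundle at w + d
            g = subgrad(w + d)
            G.append(g)
            c.append(f(w + d) - g @ d - f(w))   # c_i = f(w_i) + g_i^T (w - w_i) - f(w)
    return w

# Example: minimize the nonsmooth f(w) = ||w||_1.
f = lambda w: np.abs(w).sum()
subgrad = lambda w: np.sign(w)           # a valid subgradient of the l1 norm
print(bundle_descent(f, subgrad, np.array([1.5, -0.7])))
```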
4 EXAMPLES
4.1 A SMOOTH NETWORK WITH NONSMOOTH ERROR FUNCTION
The particular network example we consider here is a two-layer FFN (i.e., one with a single layer of hidden units) where each output unit's value y_i is computed from its discriminant function Q_{o_i} = w_{i0} + Σ_{j=1}^h w_{ij} z_j by the transfer function y_i = tanh(Q_{o_i}), where z_j is the output of the j-th hidden unit. The j-th hidden unit's output z_j is given by z_j = tanh(Q_{h_j}), where Q_{h_j} = v_{j0} + Σ_{k=1}^n v_{jk} x_k is its discriminant function. The ℓ∞ error function (which is locally Lipschitzian) is defined to be
    E(w) = max_{x∈X} max_{1≤i≤m} |Q_{o_i}(x) − t_i(x)|,    (12)
where t_i(x) is the desired output of output unit i for the input pattern x ∈ X.
To make use of the BT algorithm described in the previous section, it is necessary to obtain an expression from which a subgradient at w for E(w) in (12) can be computed. Using the generalized gradient calculus in [5, Proposition 2.3.12], a subgradient g ∈ ∂E(w) is given by the expression⁶
    g = sgn(Q_{o_{i′}}(x′) − t_{i′}(x′)) ∇_w Q_{o_{i′}}(x′)    for some i′, x′ ∈ J,    (14)
where J is the set of patterns and output indices for which E(w) in (12) attains its maximum value, and the gradient ∇_w Q_{o_{i′}}(x′) is given by
    ∇_w Q_{o_{i′}}(x′) =
        1                        w.r.t. w_{i′0}
        z_j                      w.r.t. w_{i′j}
        (1 − z_j²) w_{i′j}       w.r.t. v_{j0}
        x_k (1 − z_j²) w_{i′j}   w.r.t. v_{jk}
        0                        elsewhere.    (15)
(Note that here j = 1, 2, …, h and k = 1, …, n.)
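A direct numpy transcription of the error (12) and the subgradient (14)–(15) can be sketched as follows; the array layout (patterns in rows, a leading bias column) and the function name are conventions of this sketch rather than of the paper.

```python
import numpy as np

def linf_error_and_subgrad(V, W, X, T):
    # V: (h, n+1) hidden weights incl. bias, W: (m, h+1) output weights incl. bias,
    # X: (P, n) input patterns, T: (P, m) desired outputs t_i(x).
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])      # prepend the bias input
    Z = np.tanh(Xb @ V.T)                              # hidden outputs z_j, per pattern
    Zb = np.hstack([np.ones((Z.shape[0], 1)), Z])
    Q = Zb @ W.T                                       # output discriminants Q_{o_i}
    R = np.abs(Q - T)
    p, i = np.unravel_index(np.argmax(R), R.shape)     # one maximizing pair (x', i')
    s = np.sign(Q[p, i] - T[p, i])                     # sgn(Q_{o_i'}(x') - t_i'(x'))
    dW = np.zeros_like(W)
    dW[i] = s * Zb[p]                                  # (15): 1 w.r.t. w_{i'0}, z_j w.r.t. w_{i'j}
    dV = s * (W[i, 1:] * (1.0 - Z[p] ** 2))[:, None] * Xb[p][None, :]
    return R[p, i], dV, dW                             # E(w) and a subgradient (zero elsewhere)
```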
The BT technique outlined in the previous section was applied to the standard XOR and 838 encoder problems, using the ℓ∞ error function in (12) and subgradients from (14)–(15).
⁶Note that for a function f(w) = |w| = max{w, −w}, the generalized gradient is given by the expression
    ∂f(w) = { 1 if w > 0;  co{1, −1} if w = 0;  −1 if w < 0 }    (13)
and a suitable subgradient g ∈ ∂f(w) can be obtained by choosing g = sgn(w).
In all test runs, the BT algorithm was run until convergence to a local minimum of the ℓ∞ error function occurred, with ε set at 10⁻⁴. On the XOR problem, over 20 test runs using a randomly initialized 2-2-1 network, an average of 52 function and subgradient evaluations were required. The minimum number of function and subgradient evaluations required in the test runs was 23 and the maximum was 126. On the 838 encoder problem, over 20 test runs using a randomly initialized 8-3-8 network, an average of 334 function and subgradient evaluations were required. For this problem, the minimum number of function and subgradient evaluations required in the test runs was 221 and the maximum was 512.
4.2 A NONSMOOTH NETWORK AND NONSMOOTH ERROR FUNCTION
In this section we will consider a particular example that employs a network function that is nonsmooth as well as a nonsmooth error function (the ℓ∞ error function of the previous example).
Based on the piecewise-linear network employed by [12], let the i-th output of the network be given by the expression
    y_i = Σ_{k=1}^n u_{ik} x_k + Σ_{j=1}^h w_{ij} |Σ_{k=1}^n v_{jk} x_k + v_{j0}| + w_{i0},    (16)
with an ℓ∞-based error function
    E(w) = max_{x∈X} max_{1≤i≤m} |y_i(x) − t_i(x)|.    (17)
Once again using the generalized gradient calculus from [5, Proposition 2.3.12], a single subgradient g ∈ ∂E(w) is given by an expression of the same form as (14)–(15), with components taken w.r.t. u_{i′k}, w_{i′0}, w_{i′j}, v_{j0} and v_{jk}, and zero elsewhere.    (18)
(Note that j = 1, 2, …, h, k = 1, 2, …, n.)
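For concreteness, a small numpy sketch of the network output (16) and the error (17) is given below; it takes the hidden nonlinearity in (16) to be the absolute value (the source of the piecewise-linear, nonsmooth behaviour), and the shapes and names are our own.

```python
import numpy as np

def pl_net_outputs(U, W, V, v0, w0, X):
    # Eq. (16): y_i = sum_k u_{ik} x_k + sum_j w_{ij} |sum_k v_{jk} x_k + v_{j0}| + w_{i0}
    H = np.abs(X @ V.T + v0)           # piecewise-linear hidden activations, (patterns, h)
    return X @ U.T + H @ W.T + w0      # network outputs, (patterns, m)

def linf_error(U, W, V, v0, w0, X, T):
    # Eq. (17): E(w) = max over patterns x and outputs i of |y_i(x) - t_i(x)|
    return np.abs(pl_net_outputs(U, W, V, v0, w0, X) - T).max()
```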
In all cases the ε-stopping criterion is set at 10⁻⁴. On the XOR problem, over 20 test runs using a randomly initialized 2-2-1 network, an average of 43 function and subgradient evaluations were required. The minimum number of function and subgradient evaluations required in the test runs was 30 and the maximum was 60. On the 838 encoder problem, over 20 test runs using a randomly initialized 8-3-8 network, an average of 445 function and subgradient evaluations were required. For this problem, the minimum number of function and subgradient evaluations required in the test runs was 386 and the maximum was 502.
5 CONCLUSIONS
We have demonstrated the viability of employing NSO for training networks in cases where standard procedures, with their implicit smoothness assumption, would have difficulties or find it impossible. The particular nonsmooth examples we considered involved an error function based on the ℓ∞ norm, for the case of a network with sigmoidal characteristics and a network with a piecewise-linear characteristic.
Nonsmooth optimization problems can be dealt with in many different ways. A possible
alternative approach to the one presented here (that works for most NSO problems) is
to express the problem as a composite function and then solve it using the exact penalty
method (termed composite NSO) [11]. Fletcher [11, p. 358] states that in practice this
can require a great deal of storage or be too complicated to formulate. In contrast, the
BT algorithm solves the more general basic NSO problem and so can be more widely
applied than techniques based on composite functions. The BT algorithm is simpler to
set up, but this can be at the cost of algorithm complexity and a computational overhead.
The BT algorithm, however, does retain the gradient descent flavour of backpropagation
because it uses the generalized gradient concept along with a chain rule for computing these
(generalized) gradients. Nongradient-based and stochastic methods for NSO do exist, but
they were not considered here because they do not retain the gradient-based deterministic
flavour. It would be useful to see if these other techniques are faster for practical problems.
The message should be clear however - smooth gradient techniques should be treated with
suspicion when a nonsmooth problem is encountered, and in general the more complicated
nonsmooth methods should be employed.
References
[1] P. Burrascano, "A norm selection criterion for the generalized delta rule," IEEE Transactions on Neural Networks 2 (1991), 125–130.
[2] N. J. Redding, "Some Aspects of Representation and Learning in Artificial Neural Networks," University of Queensland, PhD Thesis, June, 1991.
[3] T. Krishnan, "On the threshold order of a Boolean function," IEEE Transactions on Electronic Computers EC-15 (1966), 369–372.
[4] M. L. Brady, R. Raghavan & J. Slawny, "Backpropagation fails to separate where perceptrons succeed," IEEE Transactions on Circuits and Systems 36 (1989).
[5] F. H. Clarke, Optimization and Nonsmooth Analysis, Canadian Mathematical Society Series of Monographs and Advanced Texts, John Wiley & Sons, New York, NY, 1983.
[6] H. Schramm & J. Zowe, "A version of the bundle idea for minimizing a nonsmooth function: conceptual ideas, convergence analysis, numerical results," SIAM Journal on Optimization (1991), to appear.
[7] V. F. Dem'yanov & V. N. Malozemov, Introduction to Minimax, John Wiley & Sons, New York, NY, 1974.
[8] P. Wolfe, "A method of conjugate subgradients for minimizing nondifferentiable functions," in Nondifferentiable Optimization, M. L. Balinski & P. Wolfe, eds., Mathematical Programming Study #3, North-Holland, Amsterdam, 1975, 145–173.
[9] C. Lemarechal, "Nondifferentiable Optimization," in Optimization, G. L. Nemhauser, A. H. G. Rinnooy Kan & M. J. Todd, eds., Handbooks in Operations Research and Management Science #1, North-Holland, Amsterdam, 1989, 529–572.
[10] K. C. Kiwiel, Methods of Descent for Nondifferentiable Optimization, Lect. Notes in Math. #1133, Springer-Verlag, New York–Heidelberg–Berlin, 1985.
[11] R. Fletcher, Practical Methods of Optimization, second edition, John Wiley & Sons, New York, NY, 1987.
[12] R. Batruni, "A multilayer neural network with piecewise-linear structure and backpropagation learning," IEEE Transactions on Neural Networks 2 (1991), 395–403.
4,354 | 4,940 | Linear Convergence with Condition Number
Independent Access of Full Gradients
Lijun Zhang Mehrdad Mahdavi Rong Jin
Department of Computer Science and Engineering
Michigan State University, East Lansing, MI 48824, USA
{zhanglij,mahdavim,rongjin}@msu.edu
Abstract
For smooth and strongly convex optimization, the optimal iteration complexity of the gradient-based algorithm is O(√κ log(1/ε)), where κ is the condition number. In the case that the optimization problem is ill-conditioned, we need to evaluate a large number of full gradients, which could be computationally expensive. In this paper, we propose to remove the dependence on the condition number by allowing the algorithm to access stochastic gradients of the objective function. To this end, we present a novel algorithm named Epoch Mixed Gradient Descent (EMGD) that is able to utilize two kinds of gradients. A distinctive step in EMGD is the mixed gradient descent, where we use a combination of the full and stochastic gradients to update the intermediate solution. Theoretical analysis shows that EMGD is able to find an ε-optimal solution by computing O(log(1/ε)) full gradients and O(κ² log(1/ε)) stochastic gradients.
1 Introduction
Convex optimization has become a tool central to many areas of engineering and applied sciences,
such as signal processing [20] and machine learning [24]. The problem of convex optimization is
typically given as
    min_{w∈W} F(w),
where W is a convex domain, and F(·) is a convex function. In most cases, the optimization algorithm for solving the above problem is an iterative process, and the convergence rate is characterized by the iteration complexity, i.e., the number of iterations needed to find an ε-optimal solution [3, 17]. In this study, we focus on first order methods, where we only have access to the (stochastic) gradient of the objective function. For most convex optimization problems, the iteration complexity of an optimization algorithm depends on the following two factors.
1. The analytical properties of the objective function. For example, is F(·) smooth or strongly convex?
2. The information that can be elicited about the objective function. For example, do we have access to the full gradient or the stochastic gradient of F(·)?
The optimal iteration complexities for some popular combinations of the above two factors are summarized in Table 1 and elaborated in the related work section. We observe that when the objective
function is smooth (and strongly convex), the convergence rate for full gradients is much faster than
that for stochastic gradients. On the other hand, the evaluation of a stochastic gradient is usually
significantly more efficient than that of a full gradient. Thus, replacing full gradients with stochastic
gradients essentially trades the number of iterations with a low computational cost per iteration.
Table 1: The optimal iteration complexity of convex optimization. L and λ are the moduli of smoothness and strong convexity, respectively. κ = L/λ is the condition number.

                                Full Gradient       Stochastic Gradient
    Lipschitz continuous        O(1/ε²)             O(1/ε²)
    Smooth                      O(√(L/ε))           O(1/ε²)
    Smooth & Strongly Convex    O(√κ log(1/ε))      O(1/(λε))
In this work, we consider the case when the objective function is both smooth and strongly convex, where the optimal iteration complexity is O(√κ log(1/ε)) if the optimization method is first order and has access to the full gradients [17]. For the optimization problems that are ill-conditioned, the condition number κ can be very large, leading to many evaluations of full gradients, an operation that is computationally expensive for large data sets. To reduce the computational cost, we are interested in the possibility of making the number of full gradients required independent from κ. Although the O(√κ log(1/ε)) rate is in general not improvable for any first order method, we bypass this difficulty by allowing the algorithm to have access to both full and stochastic gradients. Our objective is to reduce the iteration complexity from O(√κ log(1/ε)) to O(log(1/ε)) by replacing most of the evaluations of full gradients with evaluations of stochastic gradients. Under the assumption that stochastic gradients can be computed efficiently, this tradeoff could lead to a significant improvement in computational efficiency.
To this end, we developed a novel optimization algorithm named Epoch Mixed Gradient Descent (EMGD). It divides the optimization process into a sequence of epochs, an idea that is borrowed from the epoch gradient descent [9]. At each epoch, the proposed algorithm performs mixed gradient descent by evaluating one full gradient and O(κ²) stochastic gradients. It achieves a constant reduction in the optimization error for every epoch, leading to a linear convergence rate. Our analysis shows that EMGD is able to find an ε-optimal solution by computing O(log(1/ε)) full gradients and O(κ² log(1/ε)) stochastic gradients. In other words, with the help of stochastic gradients, the number of full gradients required is reduced from O(√κ log(1/ε)) to O(log(1/ε)), independent from the condition number.
2 Related Work
During the last three decades, there have been significant advances in convex optimization [3,15,17].
In this section, we provide a brief review of the first order optimization methods.
We first discuss deterministic optimization, where the gradient of the objective function is available. For the general convex and Lipschitz continuous optimization problem, the iteration complexity of gradient (subgradient) descent is O(1/ε²), which is optimal up to constant factors [15]. When the objective function is convex and smooth, the optimal optimization scheme is the accelerated gradient descent developed by Nesterov, whose iteration complexity is O(√(L/ε)) [16, 18]. With slight modifications, the accelerated gradient descent algorithm can also be applied to optimize a smooth and strongly convex objective function, whose iteration complexity is O(√κ log(1/ε)) and is in general not improvable [17, 19]. The objective of our work is to reduce the number of accesses to the full gradients by exploiting the availability of stochastic gradients.
In stochastic optimization, we have access to the stochastic gradient, which is an unbiased estimate of the full gradient [14]. Similar to the case in deterministic optimization, if the objective function is convex and Lipschitz continuous, stochastic gradient (subgradient) descent is the optimal algorithm and the iteration complexity is also O(1/ε²) [14, 15]. When the objective function is λ-strongly convex, the algorithms proposed in very recent works [9, 10, 21, 26] achieve the optimal O(1/(λε)) iteration complexity [1]. Since the convergence rate of stochastic optimization is dominated by the randomness in the gradient [6, 11], smoothness usually does not lead to a faster convergence rate for stochastic optimization. A variant of stochastic optimization is the "semi-stochastic" approximation, which interleaves stochastic gradient descent and full gradient descent [12]. In the strongly convex case, if the stochastic gradients are taken at a decreasing rate, the convergence rate can be improved to approach O(1/(λε)) [13].
From the above discussion, we observe that the iteration complexity in stochastic optimization is polynomial in 1/ε, making it difficult to find high-precision solutions. However, when the objective function is strongly convex and can be written as a sum of a finite number of functions, i.e.,
    F(w) = (1/n) Σ_{i=1}^n f_i(w),    (1)
where each f_i(·) is smooth, the iteration complexity of some specific algorithms may exhibit a logarithmic dependence on 1/ε, i.e., a linear convergence rate. The two very recent works are the stochastic average gradient (SAG) [22], whose iteration complexity is O(n log(1/ε)), provided n ≥ 8κ, and the stochastic dual coordinate ascent (SDCA) [23], whose iteration complexity is O((n + κ) log(1/ε)).¹ Under appropriate conditions, the incremental gradient method [2] and the hybrid method [5] can also minimize the function in (1) with a linear convergence rate. But those algorithms usually treat one pass of all f_i's (or a subset of the f_i's) as one iteration, and thus have a high computational cost per iteration.
3 Epoch Mixed Gradient Descent
3.1 Preliminaries
In this paper, we assume there exist two oracles.
1. The first one is a gradient oracle O_g, which for a given input point w ∈ W returns the gradient ∇F(w), that is, O_g(w) = ∇F(w).
2. The second one is a function oracle O_f, each call of which returns a random function f(·), such that
    F(w) = E_f[f(w)],    ∀w ∈ W,
and f(·) is L-smooth, that is,
    ‖∇f(w) − ∇f(w′)‖ ≤ L‖w − w′‖,    ∀w, w′ ∈ W.    (2)
Although we do not define a stochastic gradient oracle directly, the function oracle O_f allows us to evaluate the stochastic gradient of F(·) at any point w ∈ W.
Notice that the assumption about the function oracle O_f implies that the objective function F(·) is also L-smooth. Since ∇F(w) = E_f ∇f(w), by Jensen's inequality, we have
    ‖∇F(w) − ∇F(w′)‖ ≤ E_f ‖∇f(w) − ∇f(w′)‖ ≤ L‖w − w′‖,    ∀w, w′ ∈ W.    (3)
Besides, we further assume F(·) is λ-strongly convex, that is,
    ‖∇F(w) − ∇F(w′)‖ ≥ λ‖w − w′‖,    ∀w, w′ ∈ W.    (4)
From (3) and (4), it is obvious that L ≥ λ. The condition number κ is defined as the ratio between them, i.e., κ = L/λ ≥ 1.
3.2 The Algorithm
The detailed steps of the proposed Epoch Mixed Gradient Descent (EMGD) are shown in Algorithm 1, where we use the superscript for the index of epochs, and the subscript for the index of iterations at each epoch. We denote by B(x; r) the ℓ₂ ball of radius r around the point x.
Similar to the epoch gradient descent (EGD) [9], we divide the optimization process into a sequence of epochs (steps 3 to 10). While the number of accesses to the gradient oracle in EGD increases exponentially over the epochs, the number of accesses to the two oracles in EMGD is fixed.
¹In order to apply SDCA, we need to assume each function f_i is λ-strongly convex, so that we can rewrite f_i(w) as g_i(w) + (λ/2)‖w‖², where g_i(w) = f_i(w) − (λ/2)‖w‖² is convex.
Algorithm 1 Epoch Mixed Gradient Descent (EMGD)
Input: step size η, the initial domain size Δ₁, the number of iterations T per epoch, and the number of epochs m
 1: Initialize w̄¹ = 0
 2: for k = 1, …, m do
 3:    Set w_1^k = w̄^k
 4:    Call the gradient oracle O_g to obtain ∇F(w̄^k)
 5:    for t = 1, …, T do
 6:       Call the function oracle O_f to obtain a random function f_t^k(·)
 7:       Compute the mixed gradient as g̃_t^k = ∇F(w̄^k) + ∇f_t^k(w_t^k) − ∇f_t^k(w̄^k)
 8:       Update the solution by
             w_{t+1}^k = argmin_{w ∈ W ∩ B(w̄^k; Δ_k)}  η ⟨w − w_t^k, g̃_t^k⟩ + (1/2)‖w − w_t^k‖²
 9:    end for
10:    Set w̄^{k+1} = (1/(T+1)) Σ_{t=1}^{T+1} w_t^k and Δ_{k+1} = Δ_k/√2
11: end for
Return w̄^{m+1}
At the beginning of each epoch, we initialize the solution w_1^k to be the average solution w̄^k obtained from the last epoch, and then call the gradient oracle O_g to obtain ∇F(w̄^k). At each iteration t of epoch k, we call the function oracle O_f to obtain a random function f_t^k(·) and define the mixed gradient at the current solution w_t^k as
    g̃_t^k = ∇F(w̄^k) + ∇f_t^k(w_t^k) − ∇f_t^k(w̄^k),
which involves both the full gradient and the stochastic gradient. The mixed gradient can be divided into two parts: the deterministic part ∇F(w̄^k) and the stochastic part ∇f_t^k(w_t^k) − ∇f_t^k(w̄^k). Due to the smoothness property of f_t^k(·) and the shrinkage of the domain size, the norm of the stochastic part is well bounded, which is the reason why our algorithm can achieve linear convergence.
Based on the mixed gradient, we update w_t^k by a gradient mapping over a shrinking domain (i.e., W ∩ B(w̄^k; Δ_k)) in step 8. Since the updating is similar to standard gradient descent except for the domain constraint, we refer to it as mixed gradient descent for short. At the end of the iterations for epoch k, we compute the average value of T + 1 solutions, instead of T solutions, and update the domain size by reducing it by a factor of √2.
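To make the epoch structure concrete, the following is a minimal numpy sketch of EMGD on a least squares instance with f_i(w) = (x_iᵀw − y_i)²/2 and W = ℝ^d, so that step 8 reduces to a gradient step followed by projection onto the ball B(w̄^k; Δ_k). The synthetic data, the constant 8 in the choice of T, and the number of epochs are illustrative assumptions of this sketch, not the constants of Theorem 1 below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

F = lambda w: 0.5 * np.mean((X @ w - y) ** 2)          # F(w) = (1/n) sum_i f_i(w)
full_grad = lambda w: X.T @ (X @ w - y) / n            # oracle O_g

L = (X ** 2).sum(axis=1).max()                         # each f_i is L-smooth, L = max_i ||x_i||^2
mu = np.linalg.eigvalsh(X.T @ X / n)[0]                # F is mu-strongly convex
T = int(8 * (L / mu) ** 2)                             # O(kappa^2) inner iterations
eta = 1.0 / (L * np.sqrt(T))

w_ls = np.linalg.lstsq(X, y, rcond=None)[0]            # used only to set Delta_1 as in (5)
Delta = np.sqrt(2.0 * (F(np.zeros(d)) - F(w_ls)) / mu)

wbar = np.zeros(d)
for k in range(8):                                     # m epochs
    g_full = full_grad(wbar)                           # one call to O_g per epoch
    w, acc = wbar.copy(), wbar.copy()                  # step 3: w_1^k = wbar^k
    for _ in range(T):                                 # T calls to O_f per epoch
        i = rng.integers(n)
        g_mix = g_full + X[i] * (X[i] @ (w - wbar))    # step 7: grad f_i(w) - grad f_i(wbar)
        w = w - eta * g_mix                            # step 8: gradient step ...
        r, nrm = w - wbar, np.linalg.norm(w - wbar)    # ... then project onto B(wbar; Delta)
        if nrm > Delta:
            w = wbar + (Delta / nrm) * r
        acc += w
    wbar = acc / (T + 1)                               # step 10: average of T+1 iterates
    Delta /= np.sqrt(2)
    print(k, F(wbar) - F(w_ls))                        # optimization error shrinks per epoch
```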
3.3 The Convergence Rate
The following theorem shows the convergence rate of the proposed algorithm.
Theorem 1. Assume
    δ ≤ e^{−1/2},    T ≥ (1152 L²/λ²) ln(1/δ),    and    Δ₁ ≥ √( (2/λ) (F(0) − F(w*)) ).    (5)
Set η = 1/(L√T). Let w̄^{m+1} be the solution returned by Algorithm 1 after m epochs, which makes m accesses to oracle O_g and mT accesses to oracle O_f. Then, with a probability at least 1 − mδ, we have
    F(w̄^{m+1}) − F(w*) ≤ λ[Δ₁]²/2^{m+1},    and    ‖w̄^{m+1} − w*‖² ≤ [Δ₁]²/2^m.
Theorem 1 immediately implies that EMGD is able to achieve an ε optimization error by computing O(log(1/ε)) full gradients and O(κ² log(1/ε)) stochastic gradients.
Table 2: The computational complexity for minimizing (1/n) Σ_{i=1}^n f_i(w).

    Nesterov's algorithm [17]    O(√κ n log(1/ε))
    EMGD                         O((n + κ²) log(1/ε))
    SAG (n ≥ 8κ) [22]            O(n log(1/ε))
    SDCA [23]                    O((n + κ) log(1/ε))

3.4 Comparisons
Compared to the optimization algorithms that only rely on full gradients [17], the number of full gradients needed in EMGD is O(log(1/ε)) instead of O(√κ log(1/ε)). Compared to the optimization algorithms that only rely on stochastic gradients [9, 10, 21], EMGD is more efficient since it achieves a linear convergence rate.
The proposed EMGD algorithm can also be applied to the special optimization problem considered in [22, 23], where F(w) = (1/n) Σ_{i=1}^n f_i(w). To make quantitative comparisons, let's assume the full gradient is n times more expensive to compute than the stochastic gradient. Table 2 lists the computational complexities of the algorithms that enjoy linear convergence. As can be seen, the computational complexity of EMGD is lower than Nesterov's algorithm [17] as long as the condition number κ ≤ n^{2/3}, the complexity of SAG [22] is lower than Nesterov's algorithm if κ ≤ n/8, and the complexity of SDCA [23] is lower than Nesterov's algorithm if κ ≤ n².² The complexity of EMGD is on the same order as SAG and SDCA when κ ≤ n^{1/2}, but higher in other cases. Thus, in terms of computational cost, EMGD may not be the best one, but it has advantages in other aspects.
1. Unlike SAG and SDCA, which only work for unconstrained optimization problems, the proposed algorithm works for both constrained and unconstrained optimization problems, provided that the constrained problem in step 8 can be solved efficiently.
2. Unlike SAG and SDCA, which require an Ω(n) storage space, the proposed algorithm only requires a storage space of Ω(d), where d is the dimension of w.
3. The only step in Algorithm 1 that has dependence on n is step 4 for computing the gradient ∇F(w̄^k). By utilizing distributed computing, the running time of this step can be reduced to O(n/k), where k is the number of computers, and the convergence rate remains the same (see the sketch after this list). For SAG and SDCA, it is unclear whether they can reduce the running time without affecting the convergence rate.
4. The linear convergence of SAG and SDCA only holds in expectation, whereas the linear convergence of EMGD holds with a high probability, which is much stronger.
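As a sketch of the distributed computation mentioned in point 3 of the list above, the full gradient of a least squares objective decomposes over shards of the data; the shard count, the executor, and the function names are illustrative, and the communication latency that dominates in practice is not modeled here.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_grad(args):
    Xc, yc, w = args                       # one machine's shard of the data
    return Xc.T @ (Xc @ w - yc)

def full_grad_distributed(X, y, w, workers=4):
    # grad F(w) = (1/n) * sum over shards of Xc^T (Xc w - yc)
    shards = zip(np.array_split(X, workers), np.array_split(y, workers))
    with ProcessPoolExecutor(workers) as ex:
        parts = ex.map(partial_grad, [(Xc, yc, w) for Xc, yc in shards])
    return sum(parts) / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.standard_normal((10_000, 50)), rng.standard_normal(10_000)
    w = np.zeros(50)
    assert np.allclose(full_grad_distributed(X, y, w), X.T @ (X @ w - y) / len(y))
```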
4 The Analysis
In the proof, we frequently use the following property of strongly convex functions [9].
Lemma 1. Let f(x) be a λ-strongly convex function over the domain X, and x* = argmin_{x∈X} f(x). Then, for any x ∈ X, we have
    f(x) − f(x*) ≥ (λ/2) ‖x − x*‖².    (6)

4.1 The Main Idea
The proof of Theorem 1 is based on induction. From the assumption about Δ₁ in (5), together with (6), we have
    F(w̄¹) − F(w*) ≤ λ[Δ₁]²/2,    and    ‖w̄¹ − w*‖² ≤ [Δ₁]²,
²In machine learning, we usually face a regularized optimization problem min_{w∈W} (1/n) Σ_{i=1}^n ℓ(y_i; x_iᵀw) + (τ/2)‖w‖², where ℓ(·;·) is some loss function. When the norm of the data is bounded, the smoothness parameter L can be treated as a constant. The strong convexity parameter λ is lower bounded by τ. Thus, as long as τ > Ω(n^{−2/3}), which is a reasonable scenario [25], we have κ < O(n^{2/3}), indicating our proposed EMGD can be applied.
which means Theorem 1 is true for m = 0. Suppose Theorem 1 is true for m = k. That is, with a probability at least 1 − kδ, we have
    F(w̄^{k+1}) − F(w*) ≤ λ[Δ₁]²/2^{k+1},    and    ‖w̄^{k+1} − w*‖² ≤ [Δ₁]²/2^k.
Our goal is to show that after running the (k+1)-th epoch, with a probability at least 1 − (k+1)δ, we have
    F(w̄^{k+2}) − F(w*) ≤ λ[Δ₁]²/2^{k+2},    and    ‖w̄^{k+2} − w*‖² ≤ [Δ₁]²/2^{k+1}.
4.2 The Details
For simplicity of presentation, we drop the index k for the epoch. Let w̄ be the solution obtained from epoch k. Given the condition
    F(w̄) − F(w*) ≤ (λ/2)Δ²,    and    ‖w̄ − w*‖² ≤ Δ²,    (7)
we will show that after running the T iterations in one epoch, the new solution, denoted by ŵ, satisfies
    F(ŵ) − F(w*) ≤ (λ/4)Δ²,    and    ‖ŵ − w*‖² ≤ Δ²/2,    (8)
with a probability at least 1 − δ.
Define
    ḡ = ∇F(w̄),    F̂(w) = F(w) − ⟨w, ḡ⟩,    and    g_t(w) = f_t(w) − ⟨w, ∇f_t(w̄)⟩.    (9)
The objective function can be rewritten as
    F(w) = ⟨w, ḡ⟩ + F̂(w),    (10)
and the mixed gradient can be rewritten as
    g̃_t = ḡ + ∇g_t(w_t).
Then, the updating rule given in Algorithm 1 becomes
    w_{t+1} = argmin_{w ∈ W ∩ B(w̄, Δ)}  η ⟨w − w_t, ḡ + ∇g_t(w_t)⟩ + (1/2)‖w − w_t‖².    (11)
Notice that the objective function in (11) is 1-strongly convex. Using the fact that w* ∈ W ∩ B(w̄; Δ) and Lemma 1 (with x* = w_{t+1} and x = w*), we have
    η⟨w_{t+1} − w_t, ḡ + ∇g_t(w_t)⟩ + (1/2)‖w_{t+1} − w_t‖²
        ≤ η⟨w* − w_t, ḡ + ∇g_t(w_t)⟩ + (1/2)‖w* − w_t‖² − (1/2)‖w* − w_{t+1}‖².    (12)
For each iteration t in the current epoch, we have
    F(w_t) − F(w*)
        ≤ ⟨∇F(w_t), w_t − w*⟩ − (λ/2)‖w_t − w*‖²    [by (4)]
        = ⟨ḡ + ∇g_t(w_t), w_t − w*⟩ + ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩ − (λ/2)‖w_t − w*‖²,    [by (10)]    (13)
and
    ⟨ḡ + ∇g_t(w_t), w_t − w*⟩
        ≤ ⟨ḡ + ∇g_t(w_t), w_t − w_{t+1}⟩ + ‖w_t − w*‖²/(2η) − ‖w_{t+1} − w*‖²/(2η) − ‖w_t − w_{t+1}‖²/(2η)    [by (12)]
        = ⟨ḡ, w_t − w_{t+1}⟩ + ‖w_t − w*‖²/(2η) − ‖w_{t+1} − w*‖²/(2η) + ⟨∇g_t(w_t), w_t − w_{t+1}⟩ − ‖w_t − w_{t+1}‖²/(2η)
        ≤ ⟨ḡ, w_t − w_{t+1}⟩ + ‖w_t − w*‖²/(2η) − ‖w_{t+1} − w*‖²/(2η) + max_w { ⟨∇g_t(w_t), w_t − w⟩ − ‖w_t − w‖²/(2η) }
        = ⟨ḡ, w_t − w_{t+1}⟩ + ‖w_t − w*‖²/(2η) − ‖w_{t+1} − w*‖²/(2η) + (η/2)‖∇g_t(w_t)‖².    (14)
Combining (13) and (14), we have
    F(w_t) − F(w*) ≤ ‖w_t − w*‖²/(2η) − ‖w_{t+1} − w*‖²/(2η) − (λ/2)‖w_t − w*‖²
        + ⟨ḡ, w_t − w_{t+1}⟩ + (η/2)‖∇g_t(w_t)‖² + ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩.
By adding the inequalities of all iterations, we have
    Σ_{t=1}^T (F(w_t) − F(w*)) ≤ ‖w̄ − w*‖²/(2η) − ‖w_{T+1} − w*‖²/(2η) − (λ/2) Σ_{t=1}^T ‖w_t − w*‖²
        + ⟨ḡ, w̄ − w_{T+1}⟩ + (η/2) A_T + B_T,    (15)
where A_T ≜ Σ_{t=1}^T ‖∇g_t(w_t)‖² and B_T ≜ Σ_{t=1}^T ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩.
Since F(·) is L-smooth, we have
    F(w_{T+1}) − F(w̄) ≤ ⟨∇F(w̄), w_{T+1} − w̄⟩ + (L/2)‖w̄ − w_{T+1}‖²,
which implies
    ⟨ḡ, w̄ − w_{T+1}⟩ ≤ F(w̄) − F(w_{T+1}) + (L/2)‖w̄ − w_{T+1}‖²
        ≤ F(w*) − F(w_{T+1}) + (λ/2)Δ² + (L/2)Δ²    [by (7) and w_{T+1} ∈ B(w̄; Δ)]
        ≤ F(w*) − F(w_{T+1}) + LΔ².    (16)
From (15) and (16), together with ‖w̄ − w*‖² ≤ Δ² from (7), we have
    Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ Δ² ( 1/(2η) + L ) + (η/2) A_T + B_T.    (17)
Next, we consider how to bound A_T and B_T. The upper bound of A_T is given by
    A_T = Σ_{t=1}^T ‖∇g_t(w_t)‖² = Σ_{t=1}^T ‖∇f_t(w_t) − ∇f_t(w̄)‖² ≤ L² Σ_{t=1}^T ‖w_t − w̄‖² ≤ T L² Δ²,    (18)
where the first inequality follows from (2).
To bound B_T, we need the Hoeffding–Azuma inequality stated below [4].
Lemma 2. Let V₁, V₂, … be a martingale difference sequence with respect to some sequence X₁, X₂, … such that Vᵢ ∈ [Aᵢ, Aᵢ + cᵢ] for some random variable Aᵢ, measurable with respect to X₁, …, X_{i−1}, and a positive constant cᵢ. If S_n = Σ_{i=1}^n Vᵢ, then for any t > 0,
    Pr[S_n > t] ≤ exp( −2t² / Σ_{i=1}^n cᵢ² ).
Define
    V_t = ⟨∇F̂(w_t) − ∇g_t(w_t), w_t − w*⟩,    t = 1, …, T.
Recall the definition of F̂(·) and g_t(·) in (9). Based on our assumption about the function oracle O_f, it is straightforward to check that V₁, … is a martingale difference sequence with respect to g₁, …. The value of V_t can be bounded by
    |V_t| ≤ ‖∇F̂(w_t) − ∇g_t(w_t)‖ ‖w_t − w*‖
         ≤ 2Δ ( ‖∇F(w_t) − ∇F(w̄)‖ + ‖∇f_t(w_t) − ∇f_t(w̄)‖ )
         ≤ 4LΔ ‖w_t − w̄‖ ≤ 4LΔ².    [by (2), (3)]
Following Lemma 2, with a probability at least 1 − δ, we have
    B_T ≤ 4LΔ² √(2T ln(1/δ)).    (19)
By adding the inequalities in (17), (18), and (19) together, with a probability at least 1 − δ, we have
    Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ Δ² ( 1/(2η) + L + ηTL²/2 + 4L√(2T ln(1/δ)) ).
By choosing η = 1/(L√T), we have
    Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ LΔ² ( √T + 1 + 4√(2T ln(1/δ)) ) ≤ 6LΔ² √(2T ln(1/δ)),    (20)
where in the second inequality we use the condition δ ≤ e^{−1/2} in (5). By Jensen's inequality, we have
    F(ŵ) − F(w*) ≤ (1/(T+1)) Σ_{t=1}^{T+1} (F(w_t) − F(w*)) ≤ 6LΔ² √(2 ln(1/δ)) / √(T+1),    [by (20)]
and therefore, by (6),
    ‖ŵ − w*‖² ≤ (2/λ) (F(ŵ) − F(w*)) ≤ 12LΔ² √(2 ln(1/δ)) / (λ √(T+1)).
Thus, when
    T ≥ (1152 L²/λ²) ln(1/δ),
with a probability at least 1 − δ, we have
    F(ŵ) − F(w*) ≤ (λ/4) Δ²,    and    ‖ŵ − w*‖² ≤ Δ²/2.

5 Conclusion and Future Work
In this paper, we consider how to reduce the number of full gradients needed for smooth and strongly convex optimization problems. Under the assumption that both the gradient and the stochastic gradient are available, a novel algorithm named Epoch Mixed Gradient Descent (EMGD) is proposed. Theoretical analysis shows that with the help of stochastic gradients, we are able to reduce the number of full gradients needed from O(√κ log(1/ε)) to O(log(1/ε)). In the case that the objective function is in the form of (1), i.e., a sum of n smooth functions, EMGD has lower computational cost than the full gradient method [17] if the condition number κ ≤ n^{2/3}.
In practice, a drawback of EMGD is that it requires the condition number κ to be known beforehand. We will investigate how to find a good estimate of κ in the future. When the objective function is a sum of some special functions, such as the square loss (i.e., (y_i − x_iᵀw)²), we can estimate the condition number by sampling. In particular, the Hessian matrix estimated from a subset of functions, combined with the concentration inequalities for matrices [7], can be used to bound the eigenvalues of the true Hessian matrix and consequently κ. Furthermore, if there exists a strongly convex regularizer in the objective function, which happens in many machine learning problems [8], the knowledge of the regularizer itself allows us to find an upper bound on κ.
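A minimal sketch of the sampling idea for the square loss: estimate the Hessian (1/n)XᵀX + τI from a random subset of the rows and take the ratio of its extreme eigenvalues. The subsample size m and the function name are our choices here; turning this point estimate into a rigorous two-sided bound on κ is exactly where the matrix concentration inequalities of [7] would enter.

```python
import numpy as np

def estimate_condition_number(X, tau, m=200, seed=0):
    # For the square loss, the Hessian of the regularized objective is (1/n) X^T X + tau * I.
    rng = np.random.default_rng(seed)
    S = X[rng.choice(len(X), size=m, replace=False)]   # random subset of examples
    H = S.T @ S / m + tau * np.eye(X.shape[1])         # sampled Hessian estimate
    ev = np.linalg.eigvalsh(H)
    return ev[-1] / ev[0]                              # kappa ~ L / lambda
```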
Acknowledgments
This work is partially supported by ONR Award N000141210431 and NSF (IIS-1251031).
References
[1] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012.
[2] D. P. Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4):913–926, 1997.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] M. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.
[6] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization I: a generic algorithmic framework. SIAM Journal on Optimization, 22(4):1469–1492, 2012.
[7] A. Gittens and J. A. Tropp. Tail bounds for all eigenvalues of a sum of random matrices. ArXiv e-prints, arXiv:1104.4513, 2011.
[8] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, 2009.
[9] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proceedings of the 24th Annual Conference on Learning Theory, pages 421–436, 2011.
[10] A. Juditsky and Y. Nesterov. Primal-dual subgradient methods for minimizing uniformly convex functions. Technical report, 2010.
[11] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133:365–397, 2012.
[12] K. Marti. On solutions of stochastic programming problems by descent procedures with stochastic and deterministic directions. Methods of Operations Research, 33:281–293, 1979.
[13] K. Marti and E. Fuchs. Rates of convergence of semi-stochastic approximation procedures for solving stochastic optimization problems. Optimization, 17(2):243–265, 1986.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[15] A. Nemirovski and D. B. Yudin. Problem complexity and method efficiency in optimization. John Wiley & Sons Ltd, 1983.
[16] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR (translated as Soviet Math. Docl.), 269:543–547, 1983.
[17] Y. Nesterov. Introductory lectures on convex optimization: a basic course, volume 87 of Applied Optimization. Kluwer Academic Publishers, 2004.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[19] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE discussion papers, 2007.
[20] D. P. Palomar and Y. C. Eldar, editors. Convex Optimization in Signal Processing and Communications. Cambridge University Press, 2010.
[21] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning, pages 449–456, 2012.
[22] N. L. Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25, pages 2672–2680, 2012.
[23] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[24] S. Sra, S. Nowozin, and S. J. Wright, editors. Optimization for Machine Learning. The MIT Press, 2011.
[25] Q. Wu and D.-X. Zhou. SVM soft margin classifiers: Linear programming versus quadratic programming. Neural Computation, 17(5):1160–1187, 2005.
[26] L. Zhang, T. Yang, R. Jin, and X. He. O(log T) projections for stochastic optimization of smooth and strongly convex functions. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 621–629, 2013.
4,355 | 4,941 | Mixed Optimization for Smooth Functions
Mehrdad Mahdavi
Lijun Zhang
Rong Jin
Department of Computer Science and Engineering, Michigan State University, MI, USA
{mahdavim,zhanglij,rongjin}@msu.edu
Abstract
It is well known that the optimal convergence rate for stochastic optimization of smooth functions is O(1/√T), which is the same as for stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of O(1/T²). In this work, we consider a new setup for optimizing smooth functions, termed Mixed Optimization, which allows access to both a stochastic oracle and a full gradient oracle. Our goal is to significantly improve the convergence rate of stochastic optimization of smooth functions by having an additional small number of accesses to the full gradient oracle. We show that, with O(ln T) calls to the full gradient oracle and O(T) calls to the stochastic oracle, the proposed mixed optimization algorithm is able to achieve an optimization error of O(1/T).
1 Introduction
Many machine learning algorithms follow the framework of empirical risk minimization, which often can be cast into the following generic optimization problem
    min_{w∈W} G(w) := (1/n) Σ_{i=1}^n g_i(w),    (1)
where n is the number of training examples, g_i(w) encodes the loss function related to the i-th training example (x_i, y_i), and W is a bounded convex domain that is introduced to regularize the solution w ∈ W (i.e., the smaller the size of W, the stronger the regularization is). In this study, we focus on learning problems for which the loss function g_i(w) is smooth. Examples of smooth loss functions include least squares with g_i(w) = (y_i − ⟨w, x_i⟩)² and logistic regression with g_i(w) = log(1 + exp(−y_i⟨w, x_i⟩)). Since the regularization is enforced through the restricted domain W, we do not introduce an ℓ₂ regularizer λ‖w‖²/2 into the optimization problem, and as a result, we do not assume the loss function to be strongly convex. We note that a small ℓ₂ regularizer does NOT improve the convergence rate of stochastic optimization. More specifically, the convergence rate for stochastically optimizing an ℓ₂ regularized loss function remains O(1/√T) when λ = O(1/√T) [11, Theorem 1], a scenario that is often encountered in real-world applications.
A preliminary approach for solving the optimization problem in (1) is the batch gradient descent (GD) algorithm [16]. It starts with some initial point, and iteratively updates the solution using the equation w_{t+1} = Π_W(w_t − η∇G(w_t)), where Π_W(·) is the orthogonal projection onto the convex domain W. It has been shown that for smooth objective functions, the convergence rate of standard GD is O(1/T) [16], and can be improved to O(1/T²) by an accelerated GD algorithm [15, 16, 18]. The main shortcoming of the GD method is its high cost in computing the full gradient ∇G(w_t) when the number of training examples is large. Stochastic gradient descent (SGD) [3, 13, 21] alleviates this limitation of GD by sampling one (or a small set of) examples and computing a stochastic (sub)gradient at each iteration based on the sampled examples. Since the computational cost of SGD per iteration is independent of the size of the data (i.e., n), it is usually appealing for large-scale learning and optimization.
While SGD enjoys a high computational efficiency per iteration, it suffers from a slow convergence rate for optimizing smooth functions. It has been shown in [14] that the effect of the stochastic noise
Table 1: The convergence rate, number of calls to the stochastic oracle (O_s), and number of calls to the full gradient oracle (O_f) for optimizing Lipschitz continuous and smooth convex functions, using full GD, SGD, and mixed optimization methods, measured in the number of iterations T.

    Setting       Full (GD)             Stochastic (SGD)      Mixed Optimization
                  O_s = 0, O_f = T      O_s = T, O_f = 0      O_s = T, O_f = log T
    Lipschitz     O(1/√T)               O(1/√T)               —
    Smooth        O(1/T²)               O(1/√T)               O(1/T)
?
cannot be decreased with a better rate than O(1/ T ) which is significantly worse than GD that uses
the full gradients for updating the solutions and this limitation is also valid when the target function
is smooth. In addition, as we can see from Table 1, for general Lipschitz functions, SGD exhibits
the same convergence rate as that for the smooth functions, implying that smoothness is essentially
not very useful and can not be exploited in stochastic optimization. The slow convergence rate for
stochastically optimizing smooth loss functions is mostly due to the variance in stochastic gradients:
unlike the full gradient case where the norm of a gradient approaches to zero when the solution is
approaching to the optimal solution, in stochastic optimization, the norm of a stochastic gradient
is constant even when the solution is close to the
? optimal solution. It is the variance in stochastic
gradients that makes the convergence rate O(1/ T ) unimprovable in smooth setting [14, 1].
In this study, we are interested in designing an efficient algorithm that is in the same spirit as SGD but can effectively leverage the smoothness of the loss function to achieve a significantly faster convergence rate. To this end, we consider a new setup for optimization that allows us to interplay between stochastic and deterministic gradient descent methods. In particular, we assume that the optimization algorithm has access to two oracles:
• A stochastic oracle O_s that returns the loss function g_i(w) and its gradient based on the sampled training example (x_i, y_i),² and
• A full gradient oracle O_f that returns the gradient ∇G(w) for any given solution w ∈ W.
We refer to this new setting as mixed optimization in order to distinguish it from both the stochastic and full gradient optimization models. The key question we examined in this study is:
    Is it possible to improve the convergence rate for stochastic optimization of smooth functions by having a small number of calls to the full gradient oracle O_f?
We give an affirmative answer to this question. We show that with an additional O(ln T) accesses to the full gradient oracle O_f, the proposed algorithm, referred to as MixedGrad, can improve the convergence rate for stochastic optimization of smooth functions to O(1/T), the same rate as for stochastically optimizing a strongly convex function [11, 19, 23]. MixedGrad builds off multi-stage methods [11] and operates in epochs, but involves novel ingredients so as to obtain an O(1/T) rate for smooth losses. In particular, we form a sequence of strongly convex objective functions to be optimized at each epoch, and we decrease the amount of regularization and shrink the domain as the algorithm proceeds. The full gradient oracle O_f is only called at the beginning of each epoch.
Finally, we would like to distinguish mixed optimization from hybrid methods that use growing sample sizes as the optimization method proceeds to gradually transform the iterates into the full gradient method [9], and from batch gradient methods with varying sample sizes [6], which unfortunately make the iterations dependent on the sample size n, as opposed to SGD. In contrast, MixedGrad is an alternation of deterministic and stochastic gradient steps, with different frequencies for each type of step. Our result for mixed optimization is useful for the scenario where the full gradient of the objective function can be computed relatively efficiently, although it is still significantly more expensive than computing a stochastic gradient. An example of such a scenario is distributed computing, where the computation of full gradients can be sped up by having it run in parallel on many machines, with each machine containing a relatively small subset of the entire training data. Of course, the latency due to the communication between machines will result in an additional cost for computing the full gradient in a distributed fashion.
Outline: The rest of this paper is organized as follows. We begin in Section 2 by briefly reviewing the literature on deterministic and stochastic optimization. In Section 3, we introduce the necessary definitions and discuss the assumptions that underlie our analysis. Section 4 describes the MixedGrad algorithm and states the main result on its convergence rate. The proof of the main result is given in Section 5. Finally, Section 6 concludes the paper and discusses a few open questions.
¹The convergence rate can be improved to O(1/T) when the structure of the objective function is provided.
²We note that the stochastic oracle assumed in our study is slightly stronger than the stochastic gradient oracle, as it returns the sampled function instead of the stochastic gradient.
2 More Related Work
Deterministic Smooth Optimization. The convergence rate of gradient based methods usually depends on the analytical properties of the objective function to be optimized. When the objective function is strongly convex and smooth, it is well known that a simple GD method can achieve a linear convergence rate [5]. For a non-smooth Lipschitz-continuous function, the optimal rate for a first order method is only O(1/√T) [16].¹ Although the O(1/√T) rate is not improvable in general, several recent studies are able to improve this rate to O(1/T) by exploiting the special structure of the objective function [18, 17]. In full gradient based convex optimization, smoothness is a highly desirable property. It has been shown that a simple GD achieves a convergence rate of O(1/T) when the objective function is smooth, which can further be improved to O(1/T²) by using the accelerated gradient methods [15, 18, 16].
Stochastic Smooth Optimization. Unlike the optimization methods based on full gradients, the smoothness assumption was not exploited by most stochastic optimization methods. In fact, it was shown in [14] that the O(1/√T) convergence rate for stochastic optimization cannot be improved even when the objective function is smooth. This classical result is further confirmed by the recent studies of composite bounds for first order optimization methods [2, 12]. The smoothness of the objective function is exploited extensively in mini-batch stochastic optimization [7, 8], where the goal is not to improve the convergence rate but to reduce the variance in stochastic gradients and consequently the number of times the solutions are updated [24]. We finally note that the smoothness assumption coupled with strong convexity of the function is beneficial in the stochastic setting and yields a geometric convergence in expectation using the Stochastic Average Gradient (SAG) and Stochastic Dual Coordinate Ascent (SDCA) algorithms proposed in [20] and [22], respectively.
3 Preliminaries
We use bold-face letters to denote vectors. For any two vectors w, w′ ∈ W, we denote by ⟨w, w′⟩ the inner product between w and w′. Throughout this paper, we only consider the ℓ₂-norm. We assume the objective function G(w) defined in (1) to be the average of n convex loss functions. The same assumption was made in [20, 22]. We assume that G(w) is minimized at some w* ∈ W. Without loss of generality, we assume that W ⊆ B_R, a ball of radius R. Besides convexity of the individual functions, we will also assume that each g_i(w) is β-smooth, as formally defined below [16].
Definition 1 (Smoothness). A differentiable loss function f(w) is said to be β-smooth with respect to a norm ‖·‖ if it holds that
    f(w) ≤ f(w′) + ⟨∇f(w′), w − w′⟩ + (β/2)‖w − w′‖²,    ∀ w, w′ ∈ W.
The smoothness assumption also implies that ⟨∇f(w) − ∇f(w′), w − w′⟩ ≤ β‖w − w′‖², which is equivalent to ∇f(w) being β-Lipschitz continuous.
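As a quick check of Definition 1 against the examples in the introduction (a standard computation, not spelled out in the text): for the logistic loss g(w) = log(1 + exp(−y⟨w, x⟩)) with y ∈ {−1, +1},

```latex
\nabla^2 g(\mathbf{w}) \;=\; \sigma(z)\bigl(1-\sigma(z)\bigr)\,\mathbf{x}\mathbf{x}^{\top}
\;\preceq\; \tfrac{1}{4}\,\|\mathbf{x}\|^{2}\, I,
\qquad z = -y\,\langle\mathbf{w},\mathbf{x}\rangle,\quad \sigma(z)=\frac{1}{1+e^{-z}},
```

so g is β-smooth with β = ‖x‖²/4; similarly, the least squares loss (y − ⟨w, x⟩)² is β-smooth with β = 2‖x‖².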
In the stochastic first-order optimization setting, instead of having direct access to G(w), we only have access to a stochastic gradient oracle, which, given a solution w ∈ W, returns the gradient ∇g_i(w) where i is sampled uniformly at random from {1, 2, …, n}. The goal of stochastic optimization is to use a bounded number T of oracle calls and compute some w̄ ∈ W such that the optimization error, G(w̄) − G(w*), is as small as possible.
In the mixed optimization model considered in this study, we first relax the stochastic oracle O_s by assuming that it will return a randomly sampled loss function g_i(w), instead of the gradient ∇g_i(w) for a given solution w.³ Second, we assume that the learner also has access to the full gradient oracle O_f. Our goal is to significantly improve the convergence rate of stochastic gradient descent (SGD) by making a small number of calls to the full gradient oracle O_f. In particular, we show that by having only O(log T) accesses to the full gradient oracle and O(T) accesses to the stochastic oracle, we can tolerate the noise in stochastic gradients and attain an O(1/T) convergence rate for optimizing smooth functions.
³The audience may feel that this relaxation of the stochastic oracle could provide significantly more information, and that second order methods such as Online Newton [10] may be applied to achieve O(1/T) convergence. We note that (i) the proposed algorithm is a first order method, and (ii) although the Online Newton method yields a regret bound of O(1/T), its convergence rate for optimization can be as low as O(1/√T) due to the concentration bound for martingales. In addition, the Online Newton method is only applicable to exponentially concave functions, not any smooth loss function.
Algorithm 1 M IXED G RAD
Input: step size ?1 , domain size ?1 , the number of iterations T1 for the first epoch, the number of
epoches m, regularization parameter ?1 , and shrinking parameter ? > 1
?1 = 0
1: Initialize w
2: for k = 1, . . . , m do
3:
Construct the domain Wk = {w : w + wk ? W, ?w? ? ?k }
? k)
4:
Call the full gradient oracle Of for ?G(w
?
? k + ?G(w
? k ) = ?k w
? k + n1 ni=1 ?gi (w
? k)
5:
Compute gk = ?k w
6:
Initialize wk1 = 0
7:
for t = 1, . . . , Tk do
8:
Call stochastic oracle Os to return a randomly selected loss function gitk (w)
?kt = gk + ?gitk (wkt + w
? k ) ? ?gitk (w
? k)
9:
Compute the stochastic gradient as g
10:
Update the solution by
1
?kt + ?k wkt ? + ?w ? wkt ?2
wkt+1 = arg max ?k ?w ? wkt , g
2
w?Wk
11:
end for
?
k +1
? k+1 = w
?k + w
e k+1
e k+1 = Tk1+1 Tt=1
12:
Set w
wkt and w
13:
Set ?k+1 = ?k /?, ?k+1 = ?k /?, ?k+1 = ?k /?, and Tk+1 = ? 2 Tk
14: end for
? m+1
Return w
The analysis of the proposed algorithm relies on the strong convexity of intermediate loss functions
introduced to facilitate the optimization as given below.
Definition 2 (Strong convexity). A function f (w) is said to be ?-strongly convex w.r.t a norm ? ? ?,
if there exists a constant ? > 0 (often called the modulus of strong convexity) such that it holds
?
f (w) ? f (w? ) + ??f (w? ), w ? w? ? + ?w ? w? ?2 , ? w, w? ? W
2
4
Mixed Stochastic/Deterministic Gradient Descent
We now turn to describe the proposed mixed optimization algorithm and state its convergence rate.
The detailed steps of M IXED G RAD algorithm are shown in Algorithm 1. It follows the epoch
gradient descent algorithm proposed in [11] for stochastically minimizing strongly convex functions
and divides the optimization process into m epochs, but involves novel ingredients so as to obtain an
O(1/T ) convergence rate. The key idea is to introduce a ?2 regularizer into the objective function to
make it strongly convex, and gradually reduce the amount of regularization over the epochs. We also
shrink the domain as the algorithm proceeds. We note that reducing the amount of regularization
over time is closely-related to the classic proximal-point algorithms. Throughout the paper, we will
use the subscript for the index of each epoch, and the superscript for the index of iterations within
each epoch. Below, we describe the key idea behind M IXED G RAD.
? k be the solution obtained before the kth epoch, which is initialized to be 0 for the first epoch.
Let w
? k , resulting in the following
Instead of searching for w? at the kth epoch, our goal is to find w? ? w
optimization problem for the kth epoch
n
1?
?k
? k ?2 +
? k ),
?w + w
(2)
gi (w + w
min
2
n i=1
w + wk ? W
?w? ? ?k
where ?k specifies the domain size of w and ?k is the regularization parameter introduced at the
kth epoch. By introducing the ?2 regularizer, the objective function in (2) becomes strongly convex,
making it possible to exploit the technique for stochastic optimization of strongly convex function
in order to improve the convergence rate. The domain size ?k and the regularization parameter ?k
are initialized to be ?1 > 0 and ?1 > 0, respectively, and are reduced by a constant factor ? > 1
? k ?2 /2
every epoch, i.e., ?k = ?1 /? k?1 and ?k = ?1 /? k?1 . By removing the constant term ?k ?w
from the objective function in (2), we obtain the following optimization problem for the kth epoch
[
]
n
1?
?k
2
? k? +
? k) ,
?w? + ?k ?w, w
gi (w + w
(3)
min Fk (w) =
w?Wk
2
n i=1
4
where Wk = {w : w + wk ? W, ?w? ? ?k }. We rewrite the objective function Fk (w) as
n
?k
1?
? k? +
? k)
Fk (w) =
?w?2 + ?k ?w, w
gi (w + w
2
n i=1
?
?
n
n
?k
1?
1?
2
?k +
? k) +
? k ) ? ?w, ?gi (w
? k )?
=
?w? + w, ?k w
?gi (w
gi (w + w
2
n i=1
n i=1
?k
1? k
?w?2 + ?w, gk ? +
gb (w)
2
n i=1 i
n
=
where
(4)
1?
? k ) and gbik (w) = gi (w + w
? k ) ? ?w, ?gi (w
? k )?.
?gi (w
n i=1
n
?k +
gk = ?k w
The main reason for using gbik (w) instead of gi (w) is to tolerate the variance in the stochastic gradients. To see this, from the smoothness assumption of gi (w) we obtain the following inequality for
the norm of gbik (w) as:
k
?b
? k ) ? ?gi (w
? k )? ? ??w?.
gi (w)
= ??gi (w + w
As a result, since ?w? ? ?k and ?k shrinks over epochs, then ?w? will approach to zero over
epochs and consequentially ??b
gik (w)? approaches to zero, which allows us to effectively control
the variance in stochastic gradients, a key to improving the convergence of stochastic optimization
for smooth functions to O(1/T ).
Using Fk (w) in (4), at the tth iteration of the kth epoch, we call the stochastic oracle Os to randomly
select a loss function gikt (w) and update the solution by following the standard paradigm of SGD by
wkt+1
=
=
)
(
gikt (wkt ))
?w?Wk wkt ? ?k (?k wkt + gk + ?b
k
(
)
t
t
? k ) ? ?gitk (w
? k )) ,
?w?Wk wk ? ?k (?k wk + gk + ?gitk (wkt + w
(5)
where ?w?Wk (w) projects the solution w into the domain Wk that shrinks over epochs.
e k , and update the solution from w
? k to
At the end of each epoch, we compute the average solution w
? k+1 = w
?k +w
e k . Similar to the epoch gradient descent algorithm [11], we increase the number of
w
iterations by a constant ? 2 for every epoch, i.e. Tk = T1 ? 2(k?1) .
In order to perform stochastic gradient updating given in (5), we need to compute vector gk at the
beginning of the kth epoch, which requires an access to the full gradient oracle Of . It is easy to
count that the number of accesses to the full gradient oracle Of is m, and the number of accesses to
the stochastic oracle Os is
m
?
? 2m ? 1
T1 .
T = T1
? 2(i?1) = 2
? ?1
i=1
Thus, if the total number of accesses to the stochastic gradient oracle is T , the number of access to
the full gradient oracle required by Algorithm 1 is O(ln T ), consistent with our goal of making a
small number of calls to the full gradient oracle.
The theorem below shows that for smooth objective functions, by having O(ln T ) access to the
full gradient oracle Of and O(T ) access to the stochastic oracle Os , by running M IXED G RAD
algorithm, we achieve an optimization error of O(1/T ).
Theorem 1. Let ? ? e?9/2 be the failure probability. Set ? = 2, ?1 = 16? and
1
m
?
T1 = 300 ln , ?1 =
, and ?1 = R.
?
2? 3T1
( 2m
)
? m+1 be the solution returned by Algorithm 1 after m epochs
Define T = T1 2 ? 1 /3. Let w
with m = O(ln T ) calls to the full gradient oracle Of and T calls to the stochastic oracle Os . Then,
with a probability 1 ? 2?, we have
( )
?
80?R2
? m+1 ) ? min G(w) ? 2m?2 = O
.
G(w
w?W
2
T
5
5
Convergence Analysis
Now we turn to proving the main theorem. The proof will be given in a series of lemmas and
theorems where the proof of few are given in the Appendix. The proof of main theorem is based
b ?k be the optimal solution that minimizes Fk (w) defined in (3).
on induction. To this end, let w
b ?k ? ? ?k , with a high probability, it holds that
The key to our analysis is show that when ?w
b ?k+1 ? ? ?k /?, where w
b ?k+1 is the optimal solution that minimizes Fk+1 (w), as revealed by the
?w
following theorem.
b ?k and w
b ?k+1 be the optimal solutions that minimize Fk (w) and Fk+1 (w), reTheorem 2. Let w
e k+1 be the average solution obtained at the end of (kth?epoch) of M IXED G RAD
spectively, and w
b ?k ? ? ?k . By setting the step size ?k = 1/ 2? 3Tk , we have, with a
algorithm. Suppose ?w
probability 1 ? 2?,
?k
?k ?2k
b ?k+1 ? ?
e k+1 ) ? min Fk (w) ?
?w
and Fk (w
w
?
2? 4
?9/2
provided that ? ? e
and
300? 8 ? 2 1
Tk ?
ln .
?2k
?
Taking this statement as given for the moment, we proceed with the proof of Theorem 1, returning
later to establish the claim stated in Theorem 2.
Proof of Theorem 1. It is easy to check that for the first epoch, using the fact W ? BR , we have
?w?1 ? = ?w? ? ? R := ?1 .
m
b ?m+1 be the optimal solution obLet w? be the optimal solution that minimizes Fm (w) and let w
tained in the last epoch. Using Theorem 2, with a probability 1 ? 2m?, we have
?1
?m ?2m
?1 ?21
b ?m ? ? m?1 , Fm (w
e m+1 ) ? Fm (w
b ?m ) ?
?w
= 3m+1
4
?
2?
2?
Hence by expanding the left hand side and utilizing the smoothness of individual loss functions we
get
n
1?
?1 ?21
?1
? m+1 ) ? Fm (w
b ?m ) + 3m+1
e m+1 , w
? m?
gi (w
? m?1 ?w
n i=1
2?
?
? m ??1
?1 ?21
?1 ?w
+
3m+1
2m?2
2?
?
b ?m+1 ? ? ?m = ?1 ? 1?m . Since
where the last step uses the fact ?w
m
m
?
?
??1
? m? ?
e i| ?
?w
|w
?i ?
? 2?1
??1
i=1
i=1
b ?m ) +
? Fm (w
where in the last step holds under the condition ? ? 2. By combining above inequalities, we obtain
n
?1 ?21
1?
2?1 ?2
? m+1 ) ? Fm (w
b ?m ) + 3m+1
gi (w
+ 2m?21 .
n i=1
2?
?
b ?m minimizes Fm (w), for any w? ?
Our final goal is to relate Fm (w) to minw G(w). Since w
arg min G(w), we have
n
)
1?
?1 (
? m ?2 + 2?w? ? w
? m, w
? m? .
Fm (w?m ) ? Fm (w? ) =
(6)
gi (w? ) + m?1 ?w? ? w
n i=1
2?
? m ?. To this end, after the first m
Thus, the key to bound |F(w?m ) ? G(w? )| is to bound ?w? ? w
? m+1 , w
? m+2 , . . . be the sequence of solutions
epoches, we run Algorithm 1 with full gradients. Let w
generated by Algorithm 1 after the first m epochs. For this sequence of solutions, Theorem 2
e k ? ? ?k for any
will hold deterministically as we deploy the full gradient for updating, i.e., ?w
k ? m + 1. Since we reduce ?k exponentially, ?k will approach to zero and therefore the sequence
? k }?
{w
k=m+1 will converge to w? , one of the optimal solutions that minimize G(w). Since w? is the
? k }?
? k ? ? ?k for any k ? m + 1, we have
limit of sequence {w
k=m+1 and ?w
?
?
?
?
2?1
?1
? m? ?
e i| ?
? m
?w? ? w
|w
?k ? m
?1 )
?
(1
?
?
?
i=m+1
k=m+1
6
where the last step follows from the condition ? ? 2. Thus,
( 2
)
n
1?
4?1
8?21
?1
m
Fm (w? ) ?
gi (w? ) + m?1
+ m
n i=1
2?
? 2m
?
=
n
n
)
1?
2?1 ?2 (
1?
5?1 ?2
gi (w? ) + 2m?11 2 + ? ?m ?
gi (w? ) + 2m?11
n i=1
?
n i=1
?
(7)
By combining the bounds in (6) and (7), we have, with a probability 1 ? 2m?,
n
n
1?
1?
5?1 ?2
? m+1 ) ?
gi (w
gi (w? ) ? 2m?21 = O(1/T )
n i=1
n i=1
?
where
T = T1
m?1
?
k=0
?
2k
(
)
T1 ? 2m ? 1
T1 2m
?
? .
=
2
? ?1
3
We complete the proof by plugging in the stated values for ?, ?1 and ?1 .
5.1
Proof of Theorem 2
For the convenience of discussion, we drop the subscript k for epoch just to simplify our notation.
? =w
? k be the solution obtained before the start of
Let ? = ?k , T = Tk , ? = ?k , g = gk . Let w
?? = w
? k+1 be the solution obtained after running through the kth epoch. We
the epoch k, and let w
denote by F(w) and F ? (w) the objective functions Fk (w) and Fk+1 (w). They are given by
n
1?
?
? +
?
?w?2 + ??w, w?
gi (w + w)
(8)
F(w) =
2
n i=1
?
?
1?
? ?)
? ?? +
gi (w + w
?w?2 + ?w, w
2?
?
n i=1
n
F ? (w)
=
(9)
b? = w
b ?k and w
b ?? = w
b ?k+1 be the optimal solutions that minimize F(w) and F ? (w) over the
Let w
b ? ? ? ?, our goal is to show
domain Wk and Wk+1 , respectively. Under the assumption that ?w
?
??2
b ?? ? ? , F(w
? ? ) ? F (w
b ?) ?
?w
?
2? 4
b ? ) where the proof is deferred to Appendix.
The following lemma bounds F(wt ) ? F (w
Lemma 1.
b ? ?2
b ? ?2
?wt ? w
?wt+1 ? w
?
2
b ?) ?
F(wt ) ? F (w
?
+ ??b
git (wt ) + ?wt ? + ?g, wt ? wt+1 ?
2?
2?
2
?
? ?
?
b w
b w
b t ), wt ? w
b ? ) ? ?b
b ? ), wt ? w
b ? + ??b
b ? ) ? ?F(
b ? ) + ?F(w
b?
+ ?F(
git (w
git (wt ) + ?b
git (w
? 1 = 0, we have
By adding the inequality in Lemma 1 over all iterations, using the fact w
T
?
b ? ?2
b ? ?2
?w
?wT +1 ? w
b ?) ?
F(wt ) ? F(w
?
? ?g, wT +1 ?
2?
2?
t=1
+
T
T
?
??
b w
b ? ) ? ?b
b ? ), wt ? w
b ??
??b
git (wt ) + ?wt ?2 +
??F(
git (w
2 t=1
t=1
{z
} |
{z
}
|
:=AT
+
T ?
?
:=BT
?
b w
b t ), wt ? w
b ? ) ? ?F(
b ? ) + ?F(w
b? .
??b
git (wt ) + ?b
git (w
t=1
|
{z
}
:=CT
Since g = ?F(0) and
F(wT +1 ) ? F (0) ? ??F(0), wT +1 ? +
?
?
?wT +1 ?2 = ?g, wT +1 ? + ?wT +1 ?2
2
2
7
using the fact F(0) ? F(w? ) + ?2 ?w? ?2 and max(?w? ?, ?wT +1 ?) ? ?, we have
?
b ? ))
??g, wT +1 ? ? F(0) ? F (wT +1 ) + ?2 ? ??2 ? (F(wT +1 ) ? F(w
2
and therefore
(
)
T
+1
?
1
?
2
b ?) ? ?
F(wt ) ? F (w
+ ? + AT + BT + CT .
2?
2
t=1
(10)
The following lemmas bound AT , BT and CT .
Lemma 2. For AT defined above we have AT ? 6? 2 ?2 T .
The following lemma upper bounds BT and CT . The proof is based on the Bernstein?s inequality
for Martingales [4] and is given in the Appendix.
Lemma 3. With a probability 1 ? 2?, we have
)
)
(
(
?
?
1
1
1
1
BT ? ??2 ln + 2T ln
and CT ? 2??2 ln + 2T ln
.
?
?
?
?
Using Lemmas 2 and 3, by substituting the uppers bounds for AT , BT , and CT in (10), with a
probability 1 ? 2?, we obtain
(
)
?
T
+1
?
1
1
1
2
2
b ?) ? ?
F(wt ) ? F(w
+ ? + 6? ?T + 3? ln + 3? 2T ln
2?
?
?
t=1
?
By choosing ? = 1/[2? 3T ], we have
)
(
?
T
+1
?
?
1
1
2
b ? ) ? ? 2? 3T + ? + 3? ln + 3? 2T ln
F(wt ) ? F(w
?
?
t=1
?T +1
e = i=1 wt /(T + 1), we have
and using the fact w
?
?
3 ln[1/?]
3 ln[1/?]
2 5?
2
2
2 5?
b
?
?
e ? F (w
b ?) ? ?
e ?w
b ?? ? ?
F(w)
, and ? = ?w
.
T +1
? T +1
Thus, when T ? [300? 8 ? 2 ln 1? ]/?2 , we have, with a probability 1 ? 2?,
2
?
b 2 ? ? , and |F(w)
e ? F (w
b ? )| ? 4 ?2 .
?
(11)
4
?
2?
e ?w
b ? ?.
b ?? ? to ?w
The next lemma relates ?w
?
b ? ? ? ??w
e ?w
b ? ?.
Lemma 4. We have ?w
b ?? ? ? ?/?.
Combining the bound in (11) with Lemma 4, we have ?w
6
Conclusions and Open Questions
We presented a new paradigm of optimization, termed as mixed optimization, that aims to improve
the convergence rate of stochastic optimization by making a small number of calls to the full gradient
oracle. We proposed the M IXED G RAD algorithm and showed that it is able to achieve an O(1/T )
convergence rate by accessing stochastic and full gradient oracles for O(T ) and O(log T ) times,
respectively. We showed that the M IXED G RAD algorithm is able to exploit the smoothness of the
function, which is believed to be not very useful in stochastic optimization.
In the future, we would like to examine the optimality of our algorithm, namely if it is possible
to achieve a better convergence rate for stochastic optimization of smooth functions using O(ln T )
accesses to the full gradient oracle. Furthermore, to alleviate the computational cost caused by
O(log T ) accesses to the full gradient oracle, it would be interesting to empirically evaluate the
proposed algorithm in a distributed framework by distributing the individual functions among processors to parallelize the full gradient computation at the beginning of each epoch which requires
O(log T ) communications between the processors in total. Lastly, it is very interesting to check
whether an O(1/T 2 ) rate could be achieved by an accelerated method in the mixed optimization
scenario, and whether linear convergence rates could be achieved in the strongly-convex case.
Acknowledgments. The authors would like to thank the anonymous reviewers for their helpful and insightful comments. This work was supported in part by ONR Award N000141210431 and NSF (IIS-1251031).
8
References
[1] A. Agarwal, P. L. Bartlett, P. D. Ravikumar, and M. J. Wainwright. Information-theoretic
lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions
on Information Theory, 58(5):3235?3249, 2012.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for
convex optimization. Oper. Res. Lett., 31(3):167?175, 2003.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS, pages 161?168,
2008.
[4] S. Boucheron, G. Lugosi, and O. Bousquet. Concentration inequalities. In Advanced Lectures
on Machine Learning, pages 208?240, 2003.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] R. H. Byrd, G. M. Chin, J. Nocedal, and Y. Wu. Sample size selection in optimization methods
for machine learning. Mathematical programming, 134(1):127?155, 2012.
[7] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated
gradient methods. In NIPS, pages 1647?1655, 2011.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction
using mini-batches. The Journal of Machine Learning Research, 13:165?202, 2012.
[9] M. P. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting.
SIAM Journal on Scientific Computing, 34(3):A1380?A1405, 2012.
[10] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169?192, 2007.
[11] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for
stochastic strongly-convex optimization. Journal of Machine Learning Research - Proceedings
Track, 19:421?436, 2011.
[12] Q. Lin, X. Chen, and J. Pena. A smoothing stochastic gradient method for composite optimization. arXiv preprint arXiv:1008.5204, 2010.
[13] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach
to stochastic programming. SIAM J. on Optimization, 19:1574?1609, 2009.
[14] A. S. Nemirovsky and D. B. Yudin. Problem complexity and method efficiency in optimization.
1983.
[15] Y. Nesterov. A method of solving a convex programming problem with convergence rate o
(1/k2). In Soviet Mathematics Doklady, volume 27, pages 372?376, 1983.
[16] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[17] Y. Nesterov. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on
Optimization, 16(1):235?249, 2005.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., 103(1):127?
152, 2005.
[19] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex
stochastic optimization. In ICML, 2012.
[20] N. L. Roux, M. W. Schmidt, and F. Bach. A stochastic gradient method with an exponential
convergence rate for finite training sets. In NIPS, pages 2672?2680, 2012.
[21] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver
for svm. In ICML, pages 807?814, 2007.
[22] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized
loss minimization. JMLR, 14:567599, 2013.
[23] O. Shamir and T. Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. ICML, 2013.
[24] L. Zhang, T. Yang, R. Jin, and X. He. O(logt) projections for stochastic optimization of smooth
and strongly convex functions. ICML, 2013.
9
| 4941 |@word briefly:1 stronger:2 norm:6 dekel:1 open:2 git:8 nemirovsky:1 sgd:10 moment:1 initial:1 series:1 drop:1 update:4 juditsky:1 implying:1 selected:1 beginning:3 ith:1 iterates:1 math:1 zhang:4 mathematical:1 direct:1 fitting:1 introductory:1 introduce:3 examine:1 growing:1 byrd:1 solver:1 becomes:1 begin:1 provided:2 bounded:2 project:1 notation:1 spectively:1 minimizes:4 affirmative:1 every:2 concave:1 sag:1 returning:1 k2:1 doklady:1 control:1 underlie:1 t1:10 before:2 engineering:1 limit:1 parallelize:1 subscript:2 lugosi:1 examined:1 nemirovski:1 speeded:1 acknowledgment:1 regret:3 n000141210431:1 sdca:1 empirical:1 significantly:6 composite:2 projection:2 attain:1 boyd:1 get:1 onto:1 pegasos:1 selection:1 close:1 cannot:2 convenience:1 risk:1 lijun:1 equivalent:1 deterministic:6 reviewer:1 kale:2 convex:26 bachrach:1 roux:1 utilizing:1 regularize:1 vandenberghe:1 classic:1 searching:1 proving:1 coordinate:2 feel:1 target:1 suppose:1 deploy:1 shamir:4 programming:3 us:2 designing:1 expensive:1 updating:4 preprint:1 decrease:1 accessing:1 convexity:5 complexity:2 multistage:1 nesterov:4 solving:2 reviewing:1 rewrite:1 efficiency:2 learner:1 regularizer:4 soviet:1 shortcoming:1 describe:2 choosing:1 shalev:2 ixed:11 relax:1 gi:35 transform:1 superscript:1 online:5 final:1 interplay:1 sequence:5 differentiable:1 analytical:1 product:1 combining:3 alleviates:1 achieve:7 exploiting:1 convergence:41 tk:7 measured:1 strong:4 involves:2 implies:1 radius:1 closely:1 stochastic:71 preliminary:2 alleviate:1 anonymous:1 rong:1 hold:5 considered:1 exp:1 claim:1 substituting:1 achieves:1 applicable:1 cotter:1 minimization:5 aim:1 tained:1 varying:1 focus:1 check:2 contrast:2 helpful:1 dependent:1 entire:1 bt:6 interested:1 arg:2 dual:2 among:1 smoothing:1 special:1 initialize:2 a1380:1 construct:1 having:6 sampling:1 icml:4 excessive:1 future:1 minimized:1 t2:1 nonsmooth:1 simplify:1 few:2 randomly:3 individual:3 beck:1 n1:1 unimprovable:1 highly:1 a1405:1 deferred:1 behind:1 primal:1 kt:2 necessary:1 minw:1 orthogonal:1 divide:1 initialized:2 re:1 teboulle:1 cost:4 introducing:1 subset:1 answer:1 proximal:1 gd:10 siam:3 off:1 opposed:1 containing:1 worse:1 stochastically:4 return:7 oper:1 mahdavi:1 bold:1 wk:15 caused:1 depends:1 later:1 hazan:2 start:2 parallel:1 minimize:3 square:1 ni:1 improvable:1 variance:5 tk1:1 yield:3 confirmed:1 processor:2 suffers:1 definition:3 failure:1 frequency:1 proof:10 mi:1 sampled:5 organized:1 tolerate:2 follow:1 improved:4 shrink:4 strongly:13 generality:1 furthermore:1 just:1 lastly:1 hand:1 o:11 nonlinear:1 logistic:1 scientific:1 modulus:1 facilitate:1 effect:1 usa:1 regularization:8 hence:1 boucheron:1 iteratively:1 chin:1 outline:1 tt:1 complete:1 theoretic:1 novel:2 empirically:1 exponentially:1 volume:1 he:1 pena:1 kluwer:1 refer:1 cambridge:1 smoothness:11 fk:12 mathematics:1 access:18 recent:2 showed:2 optimizing:8 termed:2 scenario:4 inequality:5 onr:1 alternation:1 yi:4 exploited:3 additional:3 converge:1 paradigm:2 ii:2 relates:1 full:38 desirable:1 smooth:35 faster:1 academic:1 bach:1 believed:1 lin:1 ravikumar:1 award:1 plugging:1 prediction:1 regression:1 basic:1 essentially:1 expectation:1 arxiv:2 iteration:10 agarwal:2 gilad:1 achieved:2 audience:1 addition:2 decreased:1 publisher:1 rest:1 unlike:2 ascent:2 wkt:11 comment:1 spirit:1 sridharan:2 call:14 leverage:1 yang:1 intermediate:1 revealed:1 easy:2 bernstein:1 approaching:1 fm:11 reduce:3 inner:1 idea:2 br:2 tradeoff:1 whether:2 bartlett:1 gb:1 distributing:1 returned:1 proceed:1 
useful:3 latency:1 detailed:1 amount:3 extensively:1 tth:1 mahdavim:1 reduced:1 specifies:1 shapiro:1 nsf:1 estimated:1 per:2 track:1 key:6 lan:1 nocedal:1 relaxation:1 subgradient:1 enforced:1 run:2 letter:1 throughout:2 wu:1 appendix:3 bound:12 ct:6 distinguish:2 encountered:1 oracle:41 encodes:1 bousquet:2 min:6 optimality:1 relatively:2 department:1 ball:1 logt:1 smaller:1 describes:1 slightly:1 beneficial:1 appealing:1 making:5 restricted:1 gradually:2 ln:19 equation:1 remains:1 discus:2 turn:2 count:1 singer:1 end:7 generic:1 batch:5 schmidt:2 running:2 include:1 newton:3 exploit:2 build:1 establish:1 classical:1 objective:17 question:4 concentration:2 mehrdad:1 said:2 exhibit:1 gradient:74 kth:9 thank:1 reason:1 induction:1 assuming:1 besides:1 index:2 mini:3 minimizing:1 setup:2 mostly:1 unfortunately:1 statement:1 relate:1 gk:8 stated:2 perform:1 upper:2 finite:1 jin:2 descent:10 communication:2 introduced:3 cast:1 required:1 namely:1 optimized:2 rad:11 nip:3 able:4 beyond:1 proceeds:3 usually:2 below:4 program:1 max:2 wainwright:1 hybrid:2 regularized:2 largescale:1 advanced:1 scheme:1 improve:9 concludes:1 coupled:1 epoch:35 literature:1 geometric:1 friedlander:1 loss:19 lecture:2 mixed:13 interesting:2 limitation:2 srebro:2 ingredient:2 consistent:1 xiao:1 course:2 supported:1 last:4 enjoys:1 side:1 face:1 taking:1 barrier:1 distributed:4 lett:1 world:1 valid:1 yudin:1 author:1 made:1 projected:1 transaction:1 consequentially:2 wk1:1 assumed:1 xi:4 shwartz:2 msu:1 continuous:4 table:2 robust:1 expanding:1 rongjin:1 improving:1 bottou:1 domain:11 did:1 main:6 noise:2 referred:1 fashion:1 martingale:2 slow:2 shrinking:1 sub:2 deterministically:1 exponential:2 jmlr:1 theorem:13 removing:1 insightful:1 r2:1 rakhlin:1 svm:1 gik:1 exists:1 adding:1 effectively:2 mirror:1 chen:1 gap:1 michigan:1 logarithmic:1 relies:1 goal:8 lipschitz:6 specifically:1 operates:1 uniformly:1 wt:36 reducing:1 averaging:1 lemma:12 called:2 total:2 formally:1 select:1 accelerated:4 evaluate:1 |
4,356 | 4,942 | Stochastic Convex Optimization with
Multiple Objectives
Mehrdad Mahdavi
Michigan State University
Tianbao Yang
NEC Labs America, Inc
Rong Jin
Michigan State University
[email protected]
[email protected]
[email protected]
Abstract
In this paper, we are interested in the development of efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives
and stochasticity in the first-order information. We cast the stochastic multiple objective optimization problem into a constrained optimization problem by
choosing one function as the objective and try to bound other objectives by appropriate thresholds. We first examine a two stages exploration-exploitation based
algorithm which first approximates the stochastic objectives by sampling and
then solves a constrained stochastic optimization problem by projected gradient
method. This method attains a suboptimal convergence rate even under strong
assumption on the objectives. Our second approach is an efficient primal-dual
stochastic algorithm. It leverages on the theory of Lagrangian method ?
in constrained optimization and attains the optimal convergence rate of O(1/ T ) in
high probability for general Lipschitz continuous objectives.
1 Introduction
Although both stochastic optimization [17, 4, 18, 10, 26, 20, 22] and multiple objective optimization [9] are well studied subjects in Operational Research and Machine Learning [11, 12, 24], much
less is developed for stochastic multiple objective optimization, which is the focus of this work.
Unlike multiple objective optimization where we have access to the complete objective functions, in
stochastic multiple objective optimization, only stochastic samples of objective functions are available for optimization. Compared to the standard setup of stochastic optimization, the fundamental
challenge of stochastic multiple objective optimization is how to make appropriate tradeoff between
different objectives given that we only have access to stochastic oracles for different objectives. In
particular, an algorithm for this setting has to ponder conflicting objective functions and accommodate the uncertainty in the objectives.
A simple approach toward stochastic multiple objective optimization is to linearly combine multiple
objectives with a fixed weight assigned to each objective. It converts stochastic multiple objective
optimization into a standard stochastic optimization problem, and is guaranteed to produce Pareto
efficient solutions. The main difficulty with this approach is how to decide an appropriate weight for
each objective, which is particularly challenging when the complete objective functions are unavailable. In this work, we consider an alternative formulation that casts multiple objective optimization
into a constrained optimization problem. More specifically, we choose one of the objectives as the
target to be optimized, and use the rest of the objectives as constraints in order to ensure that each
of these objectives is below a specified level. Our assumption is that although full objective functions are unknown, their desirable levels can be provied due to the prior knowledge of the domain.
Below, we provide a few examples that demonstrate the application of stochastic multiple objective
optimization in the form of stochastic constrained optimization.
Robust Investment.
Let r ? Rn denote random returns of the n risky assets, and w ? W ?
?n
n
{w ? R+ :
i wi = 1} denote the distribution of an investor?s wealth over all assets. The
return for an investment distribution is defined as ?w, r?. The investor needs to consider conflicting
objectives such as rate of return, liquidity and risk in maximizing his wealth [2]. Suppose that r has
a unknown probability distribution with mean vector ? and covariance matrix ?. Then the target
1
of the investor is to choose an optimal portfolio w that lies on the mean-risk efficient frontier. In
mean-variance theory [15], which trades off between the expected return (mean) and risk (variance)
of a portfolio, one is interested in minimizing the variance subject to budget constraints which leads
to a formulation like:
[?
?]
w, E[rr? ]w
subject to E[?r, w?] ? ?.
min
?n
n
w?R+ ,
i
wi =1
Neyman-Pearson Classification. In the Neyman-Pearson (NP) classification paradigm (see
e.g, [19]), the goal is to learn a classifier from labeled training data such that the probability of
a false negative is minimized while the probability of a false positive is below a user-specified level
? ? (0, 1). Let hypothesis class be a parametrized convex set W = {w 7? ?w, x? : w ? Rd , ?w? ?
R} and for all (x, y) ? ? ? Rd ? {?1, +1} the loss function ? : W ? ? 7? R+ be a non-negative
convex function. While the goal of classical binary classification problem is to minimize the risk as
minw?W [L(w) = E [?(w; (x, y))]], the Neyman-Pearson targets on
min L+ (w) subject to L? (w) ? ?,
w?W
where L+ (w) = E[?(w; (x, y))|y = +1] and L? (w) = E[?(w; (x, y))|y = ?1].
Linear Optimization with Stochastic Constraints. In many applications in economics, most
notably in welfare and utility theory, and management parameters are known only stochastically
and it is unreasonable to assume that the objective functions and the solution domain are deterministically fixed. These situations involve the challenging task of pondering both conflicting goals
and random data concerning the uncertain parameters of the problem. Mathematically, the goal in
multi-objective linear programming with stochastic information is to solve:
min [?c1 (?), w? , ? ? ? , ?cK (?), w?] subject to w ? W = {w ? Rd+ : A(?)w ? b(?)},
w
where ? is the randomness in the parameters, ci , i ? [K] are the objective functions, and A and b
formulate the stochastic constraints on the solution where randomness is captured by ?.
In this paper, we first examine two methods that try to eliminate the multi-objective aspect or the
stochastic nature of stochastic multiple objective optimization and reduce the problem to a standard
convex optimization problem. We show that both methods fail to tackle the problem of stochastic
multiple objective optimization in general and require strong assumptions on the stochastic objectives, which limits their applications to real world problems. Having discussed these negative results,
we propose an algorithm that can solve the problem optimally and efficiently. We achieve
? this by
an efficient primal-dual stochastic gradient descent method that is able to attain an O(1/ T ) convergence rate for all the objectives under the standard assumption of the Lipschitz continuity of
objectives which is known to be optimal (see for instance [3]). We note that there is a flurry of research on heuristics-based methods to address the multi-objective stochastic optimization problem
(see e.g., [8] and [1] for a recent survey on existing methods). However, in contrast to this study,
most of these approaches do not have theoretical guarantees.
Finally, we would like to distinguish our work from robust optimization [5] and online learning
with long term constraint [13]. Robust optimization was designed to deal with uncertainty within
the optimization systems. Although it provides a principled framework for dealing with stochastic
constraints, it often ends up with non-convex optimization problems that are not computationally
tractable. Online learning with long term constraint generalizes online learning. Instead of requiring
the constraints to be satisfied by every solution generated by online learning, it allows the constraints
to be satisfied by the entire sequence of solutions. However, unlike stochastic multiple objective
optimization, in online learning with long term constraints, constraint functions are fixed and known
before the start of online learning.
Outline. The remainder of the paper is organized as follows. In Section 2 we establish the necessary
notation and introduce the problem under consideration. Section 3 introduces the problem reduction
methods and elaborates their disadvantages. Section 4 presents our efficient primal-dual stochastic
optimization algorithm. Finally, we conclude the paper with open questions in Section 5.
2
Preliminaries
Notation Throughout this paper, we use the following notation. We use bold-face letters to denote
vectors. We denote the inner product between two vectors w, w? ? W by ?w, w? ? where W ? Rd
is a compact closed domain. For m ? N, we denote by [m] the set {1, 2, ? ? ? {, m}. We only consider
}
the ?2 norm throughout the paper. The ball with radius R is denoted by B = w ? Rd : ?w? ? R .
Statement of the Problem In this work, we generalize online stochastic convex optimization to
the case of multiple objectives. In particular, at each iteration, the learner is asked to present a
2
solution wt , which will be evaluated by multiple loss functions ft0 (w), ft1 (w), . . . , ftm (w). A fundamental difference between single- and multi-objective optimization is that for the latter it is not
obvious how to evaluate the optimization quality. Since it is impossible to simultaneously minimize multiple loss functions and in order to avoid complications caused by handling more than
one objective, we choose one function as the objective and try to bound other objectives by appropriate thresholds. Specifically, the goal of OCO with multiple objectives becomes to minimize
?T
0
t=1 ft (wt ) and at the same time keep the other objective functions below a given threshold, i.e.
T
1? i
f (wt ) ? ?i , i ? [m],
T t=1 t
where w1 , . . . , wT are the solutions generated by the online learner and ?i specifies the level of loss
that is acceptable to the ith objective function. Since the general setup (i.e., full adversarial setup) is
challenging for online convex optimization even with two objectives [14], in this work, we consider
a simple scenario where all the loss functions fti (w), i ? [m] are i.i.d samples from an unknown
distribution [21]. We also note that our goal is NOT to find a Pareto efficient solution (a solution
is Pareto efficient if it is not dominated by any solution in the decision space). Instead, we aim to
find a solution that (i) optimizes one selected objective, and (ii) satisfies all the other objectives with
respect to the specified thresholds.
We denote by f?i (w) = Et [fti (w)], i = 0, 1, . . . , m the expected loss function of sampled function
fti (w). In stochastic multiple objective optimization, we assume that we do not have direct access
to the expected loss functions and the only information available to the solver is through a stochastic
oracle that returns a stochastic realization of the expected loss function at each call. We assume that
there exists a solution w strictly satisfying all the constraints, i.e. f?i (w) < ?i , i ? [m]. We denote
by w? the optimal solution to multiple{objective optimization, i.e., }
(1)
w? = arg min f?0 (w) : f?i (w) ? ?i , i ? [m] .
b T after T trials that (i) obeys all the constraints, i.e.
Our goal is to efficiently compute a solution w
b T ) ? ?i , i ? [m] and (ii) minimizes the objective f?0 with respect to the optimal solution w? ,
f?i (w
b T ) ? f?0 (w? ). For the convenience of discussion, we refer to ft0 (w) and f?0 (w) as the
i.e. f?0 (w
objective function, and to fti (w) and f?i (w) for all i ? [m] as the constraint functions.
Before discussing the algorithms, we first mention a few assumptions made in our analysis. We
assume that the optimal solution w? belongs to B. We also make the standard assumption that
all the loss functions, including both the objective function and constraint functions, are Lipschitz
continuous, i.e., |fti (w) ? fti (w? )| ? L?w ? w? ? for any w, w? ? B.
3
Problem Reduction and its Limitations
Here we examine two algorithms to cope with the complexity of stochastic optimization with multiple objectives and discuss some negative results which motivate the primal-dual algorithm presented
in Section 4. The first method transforms a stochastic multi-objective problem into a stochastic
single-objective optimization problem and then solves the latter problem by any stochastic programming approach. Alternatively, one can eliminate the randomness of the problem by estimating the
stochastic objectives and transform the problem into a deterministic multi-objective problem.
3.1
Linear Scalarization with Stochastic Optimization
A simple approach to solve stochastic optimization problem with multiple objectives is to eliminate
the
aspect of the problem by aggregating the m + 1 objectives into a single objective
?mmulti-objective
i
i=0 ?i ft (wt ), where ?i , i ? {0, 1, ? ? ? , m} is the weight of ith objective, and then solving the
resulting single objective stochastic problem by stochastic optimization methods. This approach
is in general known as the weighted-sum or scalarization method [1]. Although this naive idea
considerably facilitates the computational challenge of the problem, unfortunately, it is difficult to
decide the weight for each objective, such that the specified levels for different objectives are obeyed.
Beyond the hardness of optimally determining the weight of individual functions, it is also unclear
how to bound the sub-optimality of final solution for individual objective functions.
3.2
Projected Gradient Descent with Estimated Objective Functions
The main challenge of the proposed problem is that the expected constraint functions f?i (w) are not
given. Instead, only a sampled function is provided at each trial t. Our naive approach is to replace
the expected constraint function f?i (w) with its empirical estimation based on sampled objective
functions. This approach circumvents the problem of stochastically optimizing multiple objective
3
into the original online convex optimization with complex projections, and therefore can be solved
by projected gradient descent. More specifically, at trial t, given the current solution wt and received
loss functions fti (w), i = 0, 1, . . . , m, we first estimate the constraint functions as
t
1? i
fbti (w) =
fk (w), i ? [m],
t
k=1
(
}
and then update the solution by wt+1 = ?Wt wt ? ??ft0 (wt ) where ? > 0 is the step size,
?W (w) = minz?W ?z ? w? projects a solution w into domain W, and Wt is an approximate
domain given by Wt = {w : fbti (w) ? ?i , i ? [m]}.
One problem with the above approach is that although it is feasible to satisfy all the constraints based
on the true expected constraint functions, there is no guarantee that the approximate domain Wt is
not empty. One way to address this issue is to estimate the expected constraint functions by burning
the first bT trials, where b ? (0, 1) is a constant that needs to be adjusted to obtain the optimal
performance, and keep the estimated constraint functions unchanged afterwards. Given the sampled
i
received in the first bT trials, we compute the approximate domain W ? as
functions f1i , . . . , fbT
bT
{
}
1 ? i
fbi (w) =
ft (w), i ? [m], W ? = w : fbi (w) ? ?i + ??i , i = 1, . . . , m
bT t=1
where ??i > 0 is a relaxed constant introduced to ensure that with a high probability, the approximate
domain Wt is not empty provided that the original domain W is not empty.
To ensure the correctness of the above approach, we need to establish some kind of uniform (strong)
convergence assumption to make sure that the solutions obtained by projection onto the estimated
domain W ? will be close to the true domain W with high probability. It turns out that the following
assumption ensures the desired property.
Assumption 1 (Uniform Convergence). Let fbi (w), i = 0, 1, ? ? ? , m be the estimated functions
obtained by averaging over bT i.i.d samples for f?i (w), i ? [m]. We assume that, with a high
probability,
sup fbi (w) ? f?i (w) ? O([bT ]?q ), i = 0, 1, ? ? ? , m.
w?W
where q > 0 decides the convergence rate.
It is straightforward to show that under Assumption 1, with a high probability, for any w ? W,
we have w ? W ? , with appropriately chosen relaxation constant ??i , i ? [m]. Using the estimated
domain W ? , for trial t ? [bT + 1, T ], we update the solution by wt+1 = ?W ? (wt ? ??ft0 (wt )).
There are however several drawbacks with this naive approach. Since the first bT trials are used for
estimating the constraint functions, only the last (1?b)T trials are used for searching for the optimal
solution. The total amount of violation of individual constraint functions for the last (1 ? b)T trials,
?T
given by t=bT +1 f?i (wt ), is O((1 ? b)b?q T 1?q ), where each of the (1 ? b)T trials receives a
violation of O([bT ]?q ). Similarly, following
? the conventional analysis of online learning [26], we
?T
have t=bT +1 (ft0 (wt ) ? ft0 (w? )) ? O( (1 ? b)T ). Using the same trick as in [13], to obtain
a solution with zero violation of constraints, we will have a regret bound O((1 ? b)b?q T 1?q +
?
(1 ? b)T ), which yields a convergence rate of O(T ?1/2 + T ?q ) which could be worse than
the optimal rate O(T ?1/2 ) when q < 1/2. Additionally, this approach requires memorizing the
constraint functions of the first bT trials. This is in contrast to the typical assumption of online
learning where only the solution is memorized.
Remark 1. We finally remark on the uniform convergence assumption, which holds when the constraint functions are linear [25], but unfortunately does not hold for general convex Lipschitz functions. In particular, one can simply show examples where there is no uniform convergence for
stochastic convex Lipchitz functions in infinite dimensional spaces [21]. Without uniform convergence assumption, the approximate domain W ? may depart from the true W significantly at some
unknown point, which makes the above approach to fail for general convex objectives.
To address these limitations and in particular the dependence on uniform convergence assumption,
we present an algorithm that does not require projection when updating the solution and does not
require to impose any additional assumption on the stochastic functions except for the standard
Lipschitz continuity assumption. We note that our result is closely related to the recent studies of
learning from the viewpoint of optimization [23], which state that solutions found by stochastic
gradient descent can be statistically consistent even when uniform convergence theorem does not
hold.
4
Algorithm 1 Stochastic Primal-Dual Optimization with Multiple Objectives
i
1: INPUT: step size ?, ?0 = (?10 , ? ? ? , ?m
0 ), ?0 > 0, i ? [m] and total iterations T
2: w1 = ?1 = 0
3: for t = 1, . . . , T do
4:
Submit the solution wt
5:
Receive loss functions fti , i = 0, 1, . . . , m
6:
Compute the gradients ?fti (wt ), i = 0, 1, . . . , m
7:
Update the solution w and ? by
])
(
[
m
?
wt+1 = ?B (wt ? ??w Lt (wt , ?t )) = ?B wt ? ? ?ft0 (wt ) +
?it ?fti (wt )
,
i=1
(
[
])
(
)
?it+1 = ?[0,?i0 ] ?it + ???i Lt (wt , ?t ) = ?[0,?i0 ] ?it + ? fti (wt ) ? ?i .
8: end for
?
? T = Tt=1 wt /T
9: Return w
4
An Efficient Stochastic Primal-Dual Algorithm
We now turn to devise a tractable formulation of the problem, followed by an efficient primal-dual
optimization algorithm and the statements of our main results. We show that with a high probability, the solution found by the ?
proposed algorithm will exactly satisfy the expected constraints and
achieves a regret bound of O( T ). The main idea of the proposed algorithm is to design an appropriate objective that combines the loss function f?0 (w) with f?i (w), i ? [m]. As mentioned before,
owing to the presence of conflicting goals and the randomness nature of the objective functions, we
resort to seek for a solution that satisfies all the objectives instead of an optimal one. To this end, we
define the following objective function
m
?
0
?
?
L(w, ?) = f (w) +
?i (f?i (w) ? ?i ).
i=1
Note that the objective function consists of both the primal variable w ? W and dual variable
? = (?1 , . . . , ?m )? ? ?, where ? ? Rm
+ is a compact convex set that bounds the set of dual
variables and will be discussed later. In the proposed algorithm, we will simultaneously update
solutions for both w and ?. By exploring convex-concave
optimization theory [16], we will show
?
that with a high probability, the solution of regret O( T ) exactly obeyes the constraints.
As the first step, we consider a simple scenario where the obtained solution is allowed to violate the
constraints. The detailed steps of our primal-dual algorithm is presented in Algorithm 1 . It follows
the same procedure as convex-concave optimization. Since at each iteration, we only observed a
randomly sampled loss functions fti (w), i = 0, 1, . . . , m, the objective function given by
m
?
Lt (w, ?) = ft0 (w) +
?i (fti (w) ? ?i )
i=1
?
provides an unbiased estimate of L(w,
?). Given the approximate objective Lt (w, ?), the proposed algorithm tries to minimize the objective Lt (w, ?) with respect to the primal variable w and
maximize the objective with respect to the dual variable ?.
To facilitate the analysis, we first rewrite the the constrained optimization problem
min f?0 (w)
w?B?W
{
}
where W is defined as W = w : f?i (w) ? ?i , i = 1, . . . m in the following equivalent form:
m
?
min maxm f?0 (w) +
?i (f?i (w) ? ?i ).
(2)
w?B ??R+
i=1
?
We denote by w? and ?? = (?1? , . . . , ?m
? ) as the optimal primal and dual solutions to the above
convex-concave optimization problem, respectively, i.e.,
m
?
w? = arg min f?0 (w) +
?i? (f?i (w) ? ?i ),
(3)
w?B
??
=
i=1
m
?
arg max f?0 (w? ) +
??Rm
+
i=1
5
?i (f?i (w? ) ? ?i ).
(4)
The following assumption establishes upper bound on the gradients of L(w, ?) with respect to w
and ?. We later show that this assumption holds under a mild condition on the objective functions.
Assumption 2 (Gradient Boundedness). The gradients ?w L(w, ?) and ?? L(w, ?) are uniformly
bounded, i.e., there exist a constant G > 0 such that
max (?w L(w, ?), ?? L(w, ?)) ? G, for any w ? B and ? ? ?.
Under the preceding assumption, in the following theorem, we show that under appropriate
b T generated by of Algorithm 1 attains a convergence rate of
conditions,
the average solution w
?
O(1/ T ) for both the regret and the violation of the constraints.
b T be the solution obtained
Theorem 1. Set ?i0 ? ?i? + ?, i ? [m], where ? > 0 is a constant. Let w
by Algorithm 1 after T iterations. Then, with a probability 1 ? (2m + 1)?, we have
?(?)
?(?)
b T ) ? f?0 (w? ) ? ?
b T ) ? ?i ? ? , i ? [m]
f?0 (w
and f?i (w
T
?
T
?
?m i 2
2
2
2
where D = i=1 [?0 ] , ? = [ (R + D )/2T ]/G, and
?
? ?
1
2
2
?(?) = 2G R + D + 2G(R + D) 2 ln .
(5)
?
Remark 2. The parameter ? ? R+ is a quantity that may be set to obtain sharper upper bound
on the violation of constraints and may be chosen arbitrarily. In particular, a larger value for ?
imposes larger penalty on the violation of the constraints and results in a smaller violation for the
objectives.
We also can develop an algorithm that allows the solution to exactly satisfy all the constraints. To
? . We will run Algorithm 1 but with ?i replaced by ?
bi . Let G?
this end, we define ?
bi = ?i ? ??(?)
T
denote the upper bound in Assumption 2 for ?? L(w, ?) with ?
bi is replaced by ?i , i ? [m]. The
bT.
following theorem shows the property of the obtained average solution w
b T be the solution obtained by Algorithm 1 with ?i replaced by ?
Theorem 2. Let w
bi and
?i0 = ?i? + ?, i ? [m]. Then, with a probability 1 ? (2m + 1)?, we have
?m
(1 + i=1 ?i0 )?? (?)
0
0
?
?
?
b T ) ? f (w? ) ?
b T ) ? ?i , i ? [m],
f (w
and f?i (w
T
?
where ?? (?) is same as (5) with G is replaced by G? and ? = [ (R2 + D2 )/2T ]/G? .
4.1
Convergence Analysis
Here we provide the proofs of main theorems stated above. We start by proving Theorem 1 and then
extend it to prove Theorem 2.
Proof. (of Theorem 1) Using the standard analysis of convex-concave optimization, from the con?
? ?) with respect to ?, for any w ? B and
vexity of L(w,
?) with respect to w and concavity of L(?,
i
i
? ? [0, ?0 ], i ? [m], we have
? t , ?) ? L(w,
?
L(w
?t )
?
? ?
?
? t , ?t ) ? ?t ? ?, ?? L(w
? t , ?t )
? wt ? w, ?w L(w
= ?wt ? w, ?w Lt (wt , ?t )? ? ??t ? ?, ?? Lt (wt , ?t )?
?
? ?
?
? t , ?t ) ? ?w Lt (wt , ?t ) ? ?t ? ?, ?? L(w
? t , ?t ) ? ?? Lt (wt , ?t )
+ wt ? w, ?w L(w
?
?wt ? w?2 ? ?wt+1 ? w?2
??t ? ??2 ? ??t+1 ? ??2
+
2?
2?
)
?(
+ ??w Lt (wt , ?t )?2 + ??? Lt (wt , ?t )?2
2?
? ?
?
? t , ?t ) ? ?w Lt (wt , ?t ) ? ?t ? ?, ?? L(w
? t , ?t ) ? ?? Lt (wt , ?t ) ,
+ wt ? w, ?w L(w
where in the first inequality we have added and subtracted the stochastic gradients used for updating the solutions, the last inequality follows from the updating rules for wt+1 and ?t+1 and
non-expensiveness property of the orthogonal projection operation onto the convex domain.
6
By adding all the inequalities together, we get
T
?
? t , ?) ? L(w,
?
L(w
?t )
t=1
?
+
T
??
?w ? w1 ?2 + ?? ? ?1 ?2
+
??w Lt (wt , ?t )?2 + ??? Lt (wt , ?t )?2
2?
2 t=1
T
?
?
?
? ?
? t , ?t ) ? ?? Lt (wt , ?t )
? t , ?t ) ? ?w Lt (wt , ?t ) ? ?t ? ?, ?? L(w
wt ? w, ?w L(w
t=1
2
R + D2
+ ?G2 T
2?
T
?
?
? ?
?
? t , ?t ) ? ?w Lt (wt , ?t ) ? ?t ? ?, ?? L(w
? t , ?t ) ? ?? Lt (wt , ?t )
+
wt ? w, ?w L(w
?
t=1
?
R2 + D 2
1
2
?
+ ?G T + 2G(R + D) 2T ln
(w.p. 1 ? ?),
2?
?
where the last inequality follows from the Hoeffiding inequality for Martingales [6]. By expanding
the left hand side, substituting the stated value of ?, and applying the Jensen?s inequality for the
?
b T = ?T ?t /T , for any fixed ?i ? [0, ?i ], i ? [m]
b T = Tt=1 wt /T and ?
average solutions w
0
t=1
and w ? B, with a probability 1 ? ?, we have
m
m
?
?
bi (f?i (w) ? ?i )
bT) +
b T ) ? ?i ) ? f?0 (w) ?
f?0 (w
?i (f?i (w
?
(6)
T
i=1
?
i=1
?
+
2
1
+ 2G(R + D)
ln .
T
T
?
By fixing w = w? and ? = 0 in (6), we have f?i (w? ) ? ?i , i ? [m], and therefore, with a
probability 1 ? ?, have
?
?
?
R2 + D 2
2
1
0
0
?
?
b T ) ? f (w? ) + 2G
f (w
+ 2G(R + D)
ln .
T
T
?
To bound the violation of constraints we set w = w? , ?i = ?i0 , i ? [m], and ?j = ?j? , j ?= i in (6).
We have
m
?
?
bi (f?i (w? ) ? ?i )
b T ) ? ?j ) ? f?0 (w? ) ?
b T ) ? ?i ) +
b T ) + ?i0 (f?i (w
?j? (f?j (w
?
f?0 (w
T
?
? 2G
b T ) + ?i0 (f?i (w
b T ) ? ?i ) +
? f?0 (w
R2
D2
j?=i
i=1
?
m
?
b T ) ? ?j ) ? f?0 (w? ) ?
?j? (f?j (w
?i? (f?i (w? ) ? ?i )
i=1
j?=i
b T ) ? ?i ),
? ?(f?i (w
where the first inequality utilizes (4) and the second inequality utilizes (3).
We thus have, with a probability 1 ?
? ?,
?
?
2G
R2 + D 2
2G(R + D) 2
1
i
?
b T ) ? ?i ?
f (w
+
ln , i ? [m].
?
T
?
T
?
We complete the proof by taking the union bound over all the random events.
We now turn to the proof of Theorem 2 that gives high probability bound on the convergence of the
modified algorithm which obeys all the constraints.
Proof. (of Theorem 2) Following the proof of Theorem 1, with a probability 1 ? ?, we have
m
m
?
?
bi (f?i (w) ? ?
bT) +
bT) ? ?
f?0 (w
?i (f?i (w
bi ) ? f?0 (w) ?
?
bi )
T
i=1
?
? 2G?
?
i=1
R2
+
T
7
D2
?
?
+ 2G (R + D)
1
2
ln
T
?
e ? be the saddle point for the following minimax optimization problem
e ? and ?
Define w
m
?
0
?
min maxm f (w) +
?i (f?i (w) ? ?
bi )
w?B ??R+
i=1
e ? , ?i = ?i0 , and
Following the same analysis as Theorem 1, for each i ? [m], by setting w = w
j
j
j
e? , using the fact that ?
e? ? ?? , we have, with a probability 1 ? ?
?j = ?
?
?
? ? R2 + D 2
2
1 ?(?)
?
i
?
b T ) ? ?i ) ? 2G
+ 2G (R + D)
ln ? ? ? 0,
?(f (w
T
T
?
T
which completes the proof.
4.2
Implementation Issues
In order to run Algorithm 1, we need to estimate the parameter ?i0 , i ? [m], which requires to decide
the set ? by estimating an upper bound for the optimal dual variables ?i? , i ? [m]. To this end, we
consider an alternative problem to the convex-concave optimization problem in (2), i.e.
min max f?0 (w) + ? max (f?i (w) ? ?i ).
w?B ??0
1?i?m
(7)
Evidently w? is the optimal primal solution to (7). Let ?a be the optimal dual solution to the problem
in (7). We have the following proposition that links ?i? , i ? [m], the optimal dual solution to (2),
with ?a , the optimal dual solution to (7).
Proposition 1. Let?
?a be the optimal dual solution to (7) and ?i? , i ? [m] be the optimal solution to
m
(2). We have ?a = i=1 ?i? .
?m
Proof. We can rewrite (7) as min max f?0 (w) + i=1 pi ?(f?i (w) ? ?i ), where domain ?m is
w?B ??0,p??m
?m
= 1}. By redefining ?i = pi ?, we have the problem in (7)
defined as ?m = {? ? Rm
+ :
i=1 ?i ?
m
equivalent to (2) and consequently ? = i=1 ?i as claimed.
Given the result from Proposition 1, it is sufficient to bound ?a . In order to bound ?a , we need to
make certain assumption about f?i (w), i ? [m]. The purpose of introducing this assumption is to
ensure that the optimal dual variable is well bounded from the above.
?m
Assumption 3. We assume min
i=1 ?i ?f?i (w)
? ? , where ? > 0 is a constant.
???m
Equipped with Assumption 3, we are able to bound ?a by ? . To this end, using the first order optimality condition of {
(2) [7], we have ?a = ??f?0}
(w? )?/??g(w)?, where g(w) = max1?i?m f?i (w).
?m
L
i
?
Since ?g(w) ?
i=1 ?i ?f (w) : ? ? ?m , under Assumption 3, we have ?a ? ? . By comL
bining Proposition 1 with the upper bound on ?a , we obtain ?i? ? ? , i ? [m] as desired.
Finally, we note that by having ?? bounded, Assumption 2 is guaranteed by setting G2 =
(
)2
?m
?m
max(L2 1 + i=1 ?i0 , max i=1 (f?i (w) ? ?i )2 ) which follows from Lipschitz continuity of
w?B
the objective functions. In a similar way we can set G? in Theorem 2 by replacing ?i with ?
bi .
5
Conclusions and Open Questions
In this paper we have addressed the problem of stochastic convex optimization with multiple objectives underlying many applications in machine learning. We first examined a simple problem
reduction technique that eliminates the stochastic aspect of constraint functions by approximating
them using the sampled functions from each iteration. We showed that this simple idea fails to attain
the optimal convergence rate and requires to impose a strong assumption, i.e., uniform convergence,
on the objective functions. Then, we
?presented a novel efficient primal-dual algorithm which attains
the optimal convergence rate O(1/ T ) for all the objectives relying only on the Lipschitz continuity of the objective functions. This work leaves few direction for further elaboration. In particular, it
would be interesting to see whether or not making stronger assumptions on the analytical properties
of objective functions such as smoothness or strong convexity may yield improved convergence rate.
Acknowledgments. The authors would like to thank the anonymous reviewers for their helpful and insightful comments. The work of M. Mahdavi and R. Jin was supported in part by ONR Award N000141210431 and
NSF (IIS-1251031).
8
References
[1] F. B. Abdelaziz. Solution approaches for the multiobjective stochastic programming. European Journal of Operational Research, 216(1):1–16, 2012.
[2] F. B. Abdelaziz, B. Aouni, and R. E. Fayedh. Multi-objective stochastic programming for portfolio selection. European Journal of Operational Research, 177(3):1811–1823, 2007.
[3] A. Agarwal, P. L. Bartlett, P. D. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012.
[4] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In NIPS, pages 451–459, 2011.
[5] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
[6] S. Boucheron, G. Lugosi, and O. Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning, pages 208–240, 2003.
[7] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] R. Caballero, E. Cerdá, M. del Mar Muñoz, and L. Rey. Stochastic approach versus multiobjective approach for obtaining efficient solutions in stochastic multiobjective programming problems. European Journal of Operational Research, 158(3):633–648, 2004.
[9] M. Ehrgott. Multicriteria Optimization. Springer, 2005.
[10] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. Journal of Machine Learning Research - Proceedings Track, 19:421–436, 2011.
[11] K.-J. Hsiao, K. S. Xu, J. Calder, and A. O. Hero III. Multi-criteria anomaly detection using Pareto depth analysis. In NIPS, pages 854–862, 2012.
[12] Y. Jin and B. Sendhoff. Pareto-based multiobjective machine learning: An overview and case studies. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 38(3):397–415, 2008.
[13] M. Mahdavi, R. Jin, and T. Yang. Trading regret for efficiency: online convex optimization with long term constraints. JMLR, 13:2465–2490, 2012.
[14] S. Mannor, J. N. Tsitsiklis, and J. Y. Yu. Online learning with sample path constraints. Journal of Machine Learning Research, 10:569–590, 2009.
[15] H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
[16] A. Nemirovski. Efficient methods in convex programming. Lecture notes, available at http://www2.isye.gatech.edu/~nemirovs, 1994.
[17] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19:1574–1609, 2009.
[18] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
[19] P. Rigollet and X. Tong. Neyman-Pearson classification, convexity and stochastic constraints. The Journal of Machine Learning Research, 12:2831–2855, 2011.
[20] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
[21] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[22] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807–814, 2007.
[23] K. Sridharan. Learning from an Optimization Viewpoint. PhD thesis, 2012.
[24] K. M. Svore, M. N. Volkovs, and C. J. Burges. Learning to rank with multiple objective functions. In WWW, pages 367–376. ACM, 2011.
[25] H. Xu and F. Meng. Convergence analysis of sample average approximation methods for a class of stochastic mathematical programs with equality constraints. Mathematics of Operations Research, 32(3):648–668, 2007.
[26] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.
| 4942 |@word mild:1 trial:11 exploitation:1 norm:1 stronger:1 open:2 bining:1 d2:4 seek:1 covariance:1 mention:1 boundedness:1 accommodate:1 reduction:3 existing:1 current:1 com:1 designed:1 update:4 juditsky:1 selected:1 leaf:1 ith:2 provides:2 mannor:1 cse:2 complication:1 lipchitz:1 mathematical:1 direct:1 consists:1 prove:1 combine:2 introduce:1 notably:1 hardness:1 expected:9 examine:3 multi:8 moulines:1 relying:1 equipped:1 solver:2 becomes:1 fti:13 estimating:3 notation:3 provided:2 project:1 bounded:3 underlying:1 kind:1 minimizes:1 developed:1 guarantee:2 every:1 concave:5 tackle:1 finance:1 exactly:3 classifier:1 rm:3 positive:1 before:3 aggregating:1 multiobjective:4 limit:1 meng:1 path:1 hsiao:1 lugosi:1 studied:1 examined:1 challenging:3 nemirovski:3 bi:11 statistically:1 obeys:2 acknowledgment:1 investment:2 regret:6 union:1 n000141210431:1 procedure:1 empirical:1 attain:2 significantly:1 projection:4 boyd:1 get:1 convenience:1 onto:2 close:1 selection:2 pegasos:1 risk:4 impossible:1 applying:1 www:1 zinkevich:1 conventional:1 deterministic:1 lagrangian:1 equivalent:2 maximizing:1 tianbao:1 economics:1 straightforward:1 reviewer:1 convex:29 survey:1 formulate:1 kale:1 rule:1 ftm:1 vandenberghe:1 his:1 proving:1 searching:1 target:3 suppose:1 shamir:2 user:1 anomaly:1 programming:8 hypothesis:1 trick:1 trend:1 satisfying:1 particularly:1 updating:3 labeled:1 observed:1 ft:3 solved:1 ensures:1 trade:1 principled:1 mentioned:1 convexity:2 complexity:2 mu:1 asked:1 flurry:1 motivate:1 solving:1 rewrite:2 max1:1 efficiency:1 learner:2 america:1 choosing:1 pearson:4 shalev:3 heuristic:1 larger:2 solve:3 elaborates:1 transform:1 final:1 online:17 sequence:1 rr:1 evidently:1 analytical:1 propose:1 product:1 remainder:1 realization:1 achieve:1 convergence:20 empty:3 produce:1 ben:1 develop:1 fixing:1 received:2 strong:5 solves:2 trading:1 direction:1 radius:1 drawback:1 closely:1 owing:1 stochastic:63 exploration:1 memorized:1 require:3 preliminary:1 anonymous:1 proposition:4 mathematically:1 adjusted:1 rong:1 frontier:1 ft1:1 strictly:1 hold:4 exploring:1 welfare:1 caballero:1 substituting:1 achieves:1 purpose:1 estimation:1 maxm:2 correctness:1 establishes:1 weighted:1 minimization:1 aim:1 modified:1 ck:1 avoid:1 gatech:1 focus:1 nemirovs:1 rank:1 contrast:2 adversarial:1 attains:4 helpful:1 el:1 i0:11 eliminate:3 entire:1 bt:16 interested:2 arg:3 dual:19 classification:4 issue:2 denoted:1 colt:1 development:1 constrained:6 having:2 sampling:1 yu:1 icml:3 oco:1 minimized:1 np:1 markowitz:1 few:3 randomly:1 simultaneously:2 individual:3 replaced:4 detection:1 introduces:1 violation:8 primal:14 necessary:1 minw:1 orthogonal:1 desired:2 theoretical:1 uncertain:1 instance:1 disadvantage:1 introducing:1 uniform:8 optimally:2 svore:1 obeyed:1 considerably:1 fundamental:2 siam:1 volkovs:1 off:1 together:1 w1:3 thesis:1 satisfied:2 management:1 choose:3 worse:1 stochastically:2 resort:1 return:6 mahdavi:3 bold:1 inc:1 satisfy:3 caused:1 later:2 try:4 lab:2 closed:1 hazan:1 sup:1 start:2 investor:3 minimize:4 variance:3 efficiently:2 yield:2 generalize:1 asset:2 cybernetics:1 randomness:4 simultaneous:1 infinitesimal:1 obvious:1 proof:8 con:1 sampled:6 knowledge:1 organized:1 improved:1 formulation:3 evaluated:1 mar:1 strongly:2 stage:1 hand:1 receives:1 replacing:1 del:1 continuity:4 quality:1 facilitate:1 requiring:1 true:3 unbiased:1 equality:1 assigned:1 boucheron:1 deal:1 criterion:1 generalized:1 outline:1 complete:3 demonstrate:1 theoretic:1 tt:2 consideration:1 novel:1 rigollet:1 
overview:1 discussed:2 extend:1 approximates:1 refer:1 cambridge:1 smoothness:1 rd:5 fk:1 mathematics:1 similarly:1 stochasticity:1 portfolio:4 access:3 recent:2 showed:1 optimizing:1 optimizes:1 belongs:1 scenario:2 claimed:1 certain:1 sendhoff:1 inequality:9 binary:1 arbitrarily:1 discussing:1 onr:1 devise:1 captured:1 additional:1 relaxed:1 impose:2 preceding:1 paradigm:1 maximize:1 ii:3 multiple:28 full:2 desirable:1 afterwards:1 violate:1 bach:1 long:4 elaboration:1 concerning:1 ravikumar:1 award:1 iteration:5 agarwal:1 c1:1 receive:1 addressed:1 wealth:2 completes:1 appropriately:1 rest:1 unlike:2 eliminates:1 sure:1 comment:1 subject:5 ascent:1 facilitates:1 sridharan:3 call:1 www2:1 yang:2 presence:2 leverage:1 iii:1 suboptimal:1 reduce:1 inner:1 idea:3 tradeoff:1 scalarization:2 whether:1 utility:1 bartlett:1 penalty:1 rey:1 remark:3 detailed:1 involve:1 transforms:1 amount:1 mahdavim:1 http:1 specifies:1 shapiro:1 exist:1 nsf:1 estimated:6 track:1 threshold:4 lan:1 pondering:1 relaxation:1 convert:1 sum:1 run:2 letter:1 uncertainty:2 fbt:1 throughout:2 decide:3 utilizes:2 circumvents:1 decision:1 acceptable:1 bound:18 guaranteed:2 distinguish:1 followed:1 oracle:3 constraint:41 tal:1 dominated:1 bousquet:1 aspect:3 min:11 optimality:2 ball:1 smaller:1 wi:2 making:2 memorizing:1 ghaoui:1 computationally:1 neyman:4 ln:7 calder:1 ponder:1 discus:1 f1i:1 fail:2 turn:3 singer:1 ehrgott:1 tractable:2 end:6 available:3 generalizes:1 operation:2 unreasonable:1 appropriate:6 fbi:4 subtracted:1 alternative:2 original:2 ensure:4 establish:2 approximating:1 classical:1 unchanged:1 objective:97 question:2 quantity:1 depart:1 added:1 burning:1 concentration:1 dependence:1 mehrdad:1 unclear:1 gradient:13 link:1 thank:1 parametrized:1 toward:1 minimizing:1 setup:3 unfortunately:2 difficult:1 statement:2 tyang:1 sharper:1 negative:4 stated:2 design:1 implementation:1 unknown:4 upper:5 jin:4 descent:5 situation:1 rn:1 introduced:1 cast:2 specified:4 optimized:1 redefining:1 conflicting:4 nip:2 address:3 able:2 beyond:2 below:4 challenge:3 program:1 including:1 max:7 wainwright:1 event:1 difficulty:1 advanced:1 minimax:1 risky:1 naive:3 prior:1 l2:1 determining:1 asymptotic:1 loss:13 lecture:2 interesting:1 limitation:2 srebro:2 versus:1 foundation:1 sufficient:1 consistent:1 imposes:1 viewpoint:2 pareto:5 pi:2 supported:1 last:4 tsitsiklis:1 side:1 burges:1 noz:1 face:1 taking:1 barrier:1 liquidity:1 depth:1 world:1 concavity:1 author:1 made:1 projected:3 cope:1 transaction:2 approximate:6 compact:2 keep:2 dealing:1 decides:1 conclude:1 shwartz:3 alternatively:1 msu:2 continuous:2 additionally:1 learn:1 nature:2 robust:5 expanding:1 rongjin:1 operational:4 obtaining:1 unavailable:1 ft0:8 complex:1 european:3 domain:15 submit:1 main:5 linearly:1 allowed:1 xu:2 martingale:1 tong:1 sub:2 fails:1 deterministically:1 lie:1 isye:1 jmlr:1 minz:1 theorem:14 abdelaziz:2 insightful:1 jensen:1 r2:7 rakhlin:1 svm:1 multicriteria:1 exists:1 false:2 adding:1 ci:1 phd:1 nec:2 budget:1 michigan:2 lt:19 simply:1 saddle:1 g2:2 springer:1 satisfies:2 acm:1 goal:8 consequently:1 lipschitz:7 replace:1 feasible:1 man:1 specifically:3 typical:1 infinite:1 except:1 wt:54 averaging:1 uniformly:1 total:2 latter:2 evaluate:1 princeton:1 handling:1 |
4,357 | 4,943 | Data-driven Distributionally Robust Polynomial
Optimization
Martin Mevissen
IBM Research - Ireland
[email protected]
Emanuele Ragnoli
IBM Research - Ireland
[email protected]
Jia Yuan Yu
IBM Research - Ireland
[email protected]
Abstract
We consider robust optimization for polynomial optimization problems where the
uncertainty set is a set of candidate probability density functions. This set is a ball
around a density function estimated from data samples, i.e., it is data-driven and
random. Polynomial optimization problems are inherently hard due to nonconvex objectives and constraints. However, we show that by employing polynomial
and histogram density estimates, we can introduce robustness with respect to distributional uncertainty sets without making the problem harder. We show that
the optimum of the distributionally robust problem is the limit of a sequence of
tractable semidefinite programming relaxations. We also give finite-sample consistency guarantees for the data-driven uncertainty sets. Finally, we apply our
model and solution method in a water network optimization problem.
1 Introduction
For many optimization problems, the objective and constraint functions are not adequately modeled
by linear or convex functions (e.g., physical phenomena such as fluid or gas flow, energy conservation, etc.). Non-convex polynomial functions are needed to describe the model accurately. The
resulting polynomial optimization problems are hard in general. Another salient feature of realworld problems is uncertainty in the parameters of the problem (e.g., due to measurement errors,
fundamental principles, or incomplete information), and the need for optimal solutions to be robust
against worst case realizations of the uncertainty. Robust optimization and polynomial optimization are already an important topic in machine learning and operations research. In this paper, we
combine the polynomial and uncertain features and consider robust polynomial optimization.
We introduce a new notion of data-driven distributional robustness: the uncertain problem parameter is a probability distribution from which samples can be observed. Consequently, it is natural to
take as the uncertainty set a set of functions, such as a norm ball around an estimated probability distribution. This approach gives solutions that are less conservative than classical robust optimization
with a set for the uncertain parameters. It is easy to see that the set uncertainty setting is an extreme
case of a distributional uncertainty set comprised of a set of Dirac densities. This stands in sharp
contrast with real-world problems where more information is at hand than the support of the distribution of the parameters affected by uncertainty. Uncertain parameters may follow normal, Poisson,
or unknown nonparametric distributions. Such parameters arise in queueing theory, economics, etc.
We employ methods from both machine learning and optimization. First, we take care to estimate
the distribution of the uncertain parameter using polynomial basis functions. This ensures that the
resulting robust optimization problem can be reduced to a polynomial optimization problem. In turn,
we can then employ an iterative method of SDP relaxations to solve it. Using tools from machine
learning, we give a finite-sample consistency guarantee on the estimated uncertainty set. Using tools
from optimization, we give an asymptotic guarantee on the solutions of the SDP relaxations.
Section 2 presents the model of data-driven distributionally robust polynomial optimization, DRO
for short. Section 3 situates our work in the context of the literature. Our contributions are the
following. In Section 4, we consider the general case of an uncertain multivariate distribution, which
yields a generalized problem of moments for the distributionally robust counterpart. In Section 5,
we introduce an efficient histogram approximation for the case of uncertain univariate distributions,
which yields instead a polynomial optimization problem for the distributionally robust counterpart.
In Section 6, we present an application of our model and solution method in the domain of water
network optimization with real data.
2 Problem statement
Consider the following polynomial optimization problem
$$\min_{x \in X} h(x, \theta), \qquad (1)$$
where $\theta \in \mathbb{R}^n$ is an uncertain parameter of the problem. We allow $h$ to be a polynomial in $x \in \mathbb{R}^m$ and $X$ to be a basic closed semialgebraic set. That is, even if $\theta$ is fixed, (1) is a hard problem in general.
In this work, we are interested in distributionally robust optimization (DRO) problems that take the
form
$$\text{(DRO)} \qquad \min_{x \in X} \max_{f \in \mathcal{D}_{\epsilon,N}} \mathbb{E}_f\, h(x, \theta), \qquad (2)$$
where $x$ is the decision variable and $\theta$ is a random variable distributed according to an unknown probability density function $f^*$, which is the uncertain parameter in this setting. The expectation $\mathbb{E}_f$ is with respect to a density function $f$, which belongs to an uncertainty set $\mathcal{D}_{\epsilon,N}$. This uncertainty set itself is a set of possible probability density functions constructed from a given sequence of samples $\theta_1, \ldots, \theta_N$ distributed i.i.d. according to the unknown density function $f^*$ of the uncertain parameter $\theta$. We call $\mathcal{D}_{\epsilon,N}$ a distributional uncertainty set; it is a random set constructed as follows:
$$\mathcal{D}_{\epsilon,N} = \{f : \text{a prob. density s.t. } \|f - \hat{f}_N\| \le \epsilon\}, \qquad (3)$$
where $\epsilon > 0$ is a given constant, $\|\cdot\|$ is a norm, and $\hat{f}_N$ is a density function estimated from the samples $\theta_1, \ldots, \theta_N$. We describe the construction of the distributional uncertainty set in the cases of multivariate and univariate samples in Sections 4 and 5.
We say that a robust optimization problem is data-driven when the uncertainty set is an element of a sequence of uncertainty sets $\mathcal{D}_{\epsilon,1}, \mathcal{D}_{\epsilon,2}, \ldots$, where the index $N$ represents the number of samples of $\theta$ observed by the decision-maker. This definition allows us to completely separate the problem of robust optimization from that of constructing the appropriate uncertainty set $\mathcal{D}_{\epsilon,N}$. The underlying assumption is that the uncertainty set (due to finite-sample estimation of the parameter $\theta$) adapts continuously to the data as the sample size $N$ increases. By considering data-driven problems, we are essentially employing tools from statistical learning theory to derive consistency guarantees.
Let $\mathbb{R}[x]$ denote the vector space of real-valued, multivariate polynomials, i.e., every $g \in \mathbb{R}[x]$ is a function $g : \mathbb{R}^m \to \mathbb{R}$ such that
$$g(x) = \sum_{|\alpha| \le d} g_\alpha x^\alpha = \sum_{|\alpha| \le d} g_\alpha x_1^{\alpha_1} \cdots x_m^{\alpha_m}, \qquad \alpha \in \mathbb{N}^m,$$
where $\{g_\alpha\}$ is a set of real numbers. A polynomial optimization problem (POP) is given by
$$\min_{x \in K} q(x), \qquad (4)$$
where $K = \{x \in \mathbb{R}^d \mid g_1(x) \ge 0, \ldots, g_m(x) \ge 0\}$, $q \in \mathbb{R}[x]$, and $g_j \in \mathbb{R}[x]$ for $j = 1, \ldots, m$.
One of our key results arises from the observation that the distributionally robust counterpart of a POP is a POP as well. A set $K$ defined by a finite number of multivariate polynomial inequality constraints is called a basic closed semialgebraic set. As shown in [1], if the basic closed semialgebraic set $K$ is compact and archimedian, there is a hierarchy of SDP relaxations whose minima
converge to the minimum of (4) as the order of the relaxation increases. Moreover, if (4) has a unique minimal solution $x^*$, then the optimal solution $y^*_\tau$ of the $\tau$-th order SDP relaxation converges to $x^*$ as $\tau \to \infty$.
Our work combines robust optimization with notions from statistical machine learning, such as density estimation and consistency. Our data-driven robust polynomial optimization method applies to
a number of machine learning problems. One example arises in Markov decision problems where a
high-dimensional value-function is approximated by a low-dimensional polynomial V . A distributionally robust variant of value iteration can be cast as:
$$\max_{a \in A} \min_{f \in \mathcal{D}_{\epsilon,N}} \mathbb{E}_f \Big\{ r(x, a, \theta) + \gamma \sum_{x' \in X} P(x' \mid x, a, \theta) V(x') \Big\},$$
where $\theta$ is a random parameter with unknown distribution and the uncertainty set $\mathcal{D}_{\epsilon,N}$ of possible distributions is constructed by estimation. We present next two further examples.
Example 2.1 (Distributionally robust ridge regression). We are given an i.i.d. sequence of
observation-label samples $\{(\theta_i, y_i) \in \mathbb{R}^{n-1} \times \mathbb{R} : i = 1, \ldots, N\}$ from an unknown distribution $f^*$, where each observation $\theta_i$ has an associated label $y_i \in \mathbb{R}$. Ridge regression minimizes the empirical residual with $\ell_2$-regularization and uses the samples to construct the residual function. The distributionally robust version of ridge regression is a conceptually different approach: it uses the samples to construct a random uncertainty set $\mathcal{D}_{\epsilon,N}$ to estimate the distribution $f^*$ and can be formulated as
$$\min_{u \in \mathbb{R}^n} \max_{f \in \mathcal{D}_{\epsilon,N}} \mathbb{E}_f (y_{N+1} - \theta_{N+1} \cdot u)^2 + \lambda (u \cdot u),$$
where $\mathcal{D}_{\epsilon,N}$ is the uncertainty set of possible densities constructed from the $N$ samples. Our solution
methods can even be applied to regression problems with nonconvex loss and penalty functions.
Example 2.2 (Robust investment). Optimization problems of the form of (2) arise in problems that involve monetary measures of risk in finance [2]. For instance, the problem of robust investment in a vector of (random) financial positions $\theta \in \mathbb{R}^n$ is
$$\min_{v \in \Delta_n} \sup_{Q \in \mathcal{Q}} -\mathbb{E}_Q\, U(v \cdot \theta),$$
where $\mathcal{Q}$ denotes a set of probability distributions, $U$ is a utility function, and $v \cdot \theta$ is an allocation among financial positions. If $U$ is polynomial, then the robust utility functional is a special case of DRO.
3 Our contribution in context
To situate our work within the literature, it is important to note that we consider distributional
uncertainty sets and polynomial constraints and objectives. In this section, we outline related works
with different and similar uncertainty sets, constraints and objectives.
Robust optimization problems of the form of (2) have been studied in the literature with different
uncertainty sets. In several works, the uncertainty sets are defined in terms of moment constraints [3, 4, 5]. Moment-based uncertainty sets are motivated by the fact that probabilistic constraints can
be replaced by constraints on the first and second moments in some cases [6].
In contrast, we do not consider moment constraints, but distributional uncertainty sets based on
probability density functions with the $L_p$-norm as the metric. One reason for our approach is that
higher moments are difficult to estimate [7]. In contrast, probability density functions can be readily
estimated using a variety of data-driven methods, e.g., empirical histograms, kernel-based [8, 9], and
orthogonal basis [10] estimates. Uncertainty sets defined by distribution-based constraints appear
also in problems of risk measures [11]. For example, uncertainty sets defined using the Kantorovich distance are considered in [5, Section 4] and [11], while [5, Section 3] and [12] consider distributional uncertainty with both measure bounds (of the form $\mu_1 \le \mu \le \mu_2$) and moment constraints. [13] considers distributional uncertainty sets with a $\phi$-divergence metric. A notion of distributional
uncertainty set has also been studied in the setting of Markov decision problems [14]. However, in
those works, the uncertainty set is not data-driven.
Robust optimization formulations for polynomial optimization problems have been studied in [1, 15]
with deterministic uncertainty sets (i.e., neither distributional, nor data-driven). A contribution is to
show how to transform distributionally robust counterparts of polynomial optimization problems
into polynomial optimization problems. In order to solve these POPs, we take advantage of the hierarchy of SDP relaxations from [1]. Another contribution of this work is to use sampled information
to construct distributional uncertainty sets more suitable for problems where more and more data is
collected over time.
4 Multivariate uncertainty around polynomial density estimate
In this section, we construct a data-driven uncertainty set in the $L_2$-space, with the $L_2$ norm $\|\cdot\|_2$. Furthermore, we assume the support of $\theta$ is contained in some basic closed semialgebraic set $S := \{z \in \mathbb{R}^n \mid s_j(z) \ge 0,\ j = 1, \ldots, r\}$, where $s_j \in \mathbb{R}[z]$.
In order to construct a data-driven distributional uncertainty set, we need to estimate the density $f^*$ of the parameter $\theta$. Various density estimation approaches exist, e.g., kernel-density and histogram estimation. Some of these give rise to a computational problem due to the curse of dimensionality. However, to ensure that the resulting robust optimization problem remains a polynomial optimization problem, we define the empirical density estimate $\hat{f}_N$ as a multivariate polynomial (cf. Section 2).
Let $\{\varphi_k\}$ denote the univariate Legendre polynomials:
$$\varphi_k(a) = \sqrt{\frac{2k+1}{2}}\, \frac{1}{2^k k!} \frac{d^k}{da^k} (a^2 - 1)^k, \qquad a \in \mathbb{R},\ k = 0, 1, \ldots$$
Let $\alpha \in \mathbb{N}^n$, $z \in \mathbb{R}^n$, and let $\varphi_\alpha(z) = \varphi_{\alpha_1}(z_1) \cdots \varphi_{\alpha_n}(z_n)$ denote the multivariate Legendre polynomial. In this section, we employ the following Legendre series density estimator [10]:
$$\hat{f}_N(z) = \sum_{|\alpha| \le d} \Big( \frac{1}{N} \sum_{j=1}^{N} \varphi_\alpha(\theta_j) \Big) \varphi_\alpha(z).$$
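As a concrete illustration, the univariate case of this estimator can be computed with NumPy's Legendre utilities. This is a sketch under the assumption that the samples have been rescaled to $[-1, 1]$, where the normalized Legendre polynomials are orthonormal.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_density_estimate(samples, d):
    """Series estimate f_hat(z) = sum_{k<=d} c_k * phi_k(z) on [-1, 1],
    with phi_k(a) = sqrt((2k+1)/2) P_k(a) and c_k = mean_j phi_k(theta_j)."""
    samples = np.asarray(samples, dtype=float)
    k = np.arange(d + 1)
    norm = np.sqrt((2 * k + 1) / 2.0)     # phi_k = norm[k] * P_k
    V = leg.legvander(samples, d)         # V[j, k] = P_k(theta_j)
    c = norm * V.mean(axis=0)             # coefficients in the phi basis
    return lambda z: leg.legval(np.asarray(z, dtype=float), norm * c)
```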
In turn, we define the following uncertainty set:
$$\mathcal{D}_{d,\epsilon,N} = \Big\{ f \in \mathbb{R}[z]_d \ \Big|\ \int_S f(z)\, dz = 1,\ \big\| f - \hat{f}_N \big\|_2 \le \epsilon \Big\},$$
where $\mathbb{R}[z]_d$ denotes the vector space of polynomials in $\mathbb{R}[z]$ of degree at most $d$. Observe that the polynomials in $\mathcal{D}_{d,\epsilon,N}$ are not required to be non-negative on $S$. However, the non-negativity constraint on $S$ can be added at the expense of making the resulting DRO problem for a POP a generalized problem of moments.
4.1 Solving the DRO
Next, we present asymptotic guarantees for solving distributionally robust polynomial optimization
through SDP relaxations. This result rests on the following assumptions, which are detailed in [1].
Assumption 4.1. The sets $X = \{x \in \mathbb{R}^m \mid k_j(x) \ge 0,\ j = 1, \ldots, t\}$ and $S = \{z \in \mathbb{R}^n \mid s_j(z) \ge 0,\ j = 1, \ldots, r\}$ are compact. There exist $u \in \mathbb{R}[x]$ and $v \in \mathbb{R}[z]$ such that $u = u_0 + \sum_{j=1}^{t} u_j k_j$ and $v = v_0 + \sum_{j=1}^{r} v_j s_j$ for some sum-of-squares polynomials $\{u_j\}_{j=0}^{t}$, $\{v_j\}_{j=0}^{r}$, and the level sets $\{x \mid u(x) \ge 0\}$ and $\{z \mid v(z) \ge 0\}$ are compact.
Note that sets $X$ and $S$ satisfying Assumption 4.1 are called archimedian. This assumption is not much more restrictive than compactness; e.g., if $S := \{z \in \mathbb{R}^n \mid s_j(z) \ge 0,\ j = 1, \ldots, r\}$ is compact, then there exists an $L_2$-ball of radius $R$ that contains $S$. Thus, $S = \bar{S} = \{z \in \mathbb{R}^n \mid s_j(z) \ge 0,\ j = 1, \ldots, r,\ \sum_{i=1}^{n} z_i^2 \le R\}$. With Theorem 1 in [22] it follows that $\bar{S}$ satisfies Assumption 4.1.
Theorem 4.1. Suppose that Assumption 4.1 holds. Let $h \in \mathbb{R}[x, z]$, $\hat{f}_N \in \mathbb{R}[z]$, and let $X$ and $S$ be basic closed semialgebraic sets. Let $V^* \in \mathbb{R}$ denote the optimum of the problem
$$\min_{x \in X} \max_{f \in \mathcal{D}_{d,\epsilon,N}} \int_S h(x, z) f(z)\, dz. \qquad (5)$$
(i) Then, there exists a sequence of SDP relaxations $\mathrm{SDP}_r$ such that $\min \mathrm{SDP}_r \nearrow V^*$ as $r \to \infty$.
(ii) If (5) has a unique minimizer $x^*$, let $m_r$ be the sequence of subvectors of optimal solutions of $\mathrm{SDP}_r$ associated with the first-order moments of the monomials in $x$ only. Then $m_r \to x^*$ componentwise as $r \to \infty$.
All proofs appear in the appendix of the supplementary material.
4.2 Consistency of the uncertainty set
In this section, we show that the uncertainty set that we constructed is consistent. In other words, given constants $\epsilon$ and $\delta$, we give the number of samples $N$ needed to ensure that the closest polynomial to the unknown density $f^*$ belongs to the uncertainty set $\mathcal{D}_{d,\epsilon,N}$ with probability $1 - \delta$.
Theorem 4.2 ([10, Section 3]). Let $c_\alpha$ denote the coefficients $c_\alpha = \int \varphi_\alpha f^*$ for all values of the multi-index $\alpha$. Suppose that the density function $f^*$ is square-integrable. We have $\mathbb{E}\| f^* - \hat{f}_N \|_2^2 \le C_H \sum_{\alpha : |\alpha| \le d} \min(1/N, c_\alpha^2)$, where $C_H$ is a constant that depends only on $f^*$.
As a corollary of Theorem 4.2, we obtain the following.
Corollary 4.3. Suppose that the assumptions of Theorem 4.2 hold. Let $g_d^*$ denote the polynomial function $g_d^*(x) = \sum_{\alpha : |\alpha| \le d} c_\alpha x^\alpha$. There exists a function¹ $\psi$ such that $\psi(d) \searrow 0$ as $d \to \infty$ and such that
$$\mathbb{P}(g_d^* \in \mathcal{D}_{d,\epsilon,N}) \ge 1 - \frac{C_H \sum_{\alpha : |\alpha| \le d} \min(1/N, c_\alpha^2) + \psi^2(d)}{(\epsilon - \psi(d))^2},$$
for $\epsilon > \psi(d)$.
Remark 1. Observe that since $\sum_{\alpha : |\alpha| \le d} \min(1/N, c_\alpha^2) \le \binom{n+d}{d}/N = (n+d)!/(N\, d!\, n!)$, by an appropriate choice of $N$ it is possible to guarantee that the right-hand side tends to zero, even as $d \to \infty$.
5 Univariate uncertainty around histogram density estimate
In this section, we describe an additional layer of approximation for the univariate uncertainty setting. In contrast to Section 4, by approximating the uncertainty set $\mathcal{D}_{\epsilon,N}$ by a set of histogram density functions, we reduce the DRO problem to a polynomial optimization problem of degree identical to that of the original problem. Moreover, we derive finite-sample consistency guarantees. We assume that samples $\theta_1, \ldots, \theta_N$ are given for the uncertain parameter $\theta$, which takes values in a given interval $[A, B] \subset \mathbb{R}$; i.e., in contrast to the previous section, we assume that the uncertain parameter takes values in a bounded interval. We partition $[A, B]$ into $K$ intervals $u_0, \ldots, u_{K-1}$ such that $|u_k| = |B - A|/K$ for all $k = 0, \ldots, K-1$. Let $m_0, \ldots, m_{K-1}$ denote the midpoints of the respective intervals. We define the empirical density vector $\hat{p}_{N,K}$:
$$\hat{p}_{N,K}(k) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}[\theta_i \in u_k] \qquad \text{for all } k = 0, \ldots, K-1.$$
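In code, $\hat{p}_{N,K}$ and the bin midpoints are one histogram call (a sketch; the support $[A, B]$ is assumed known):

```python
import numpy as np

def empirical_density_vector(samples, A, B, K):
    """Return (p_hat, midpoints): p_hat[k] is the fraction of samples in the
    k-th of K equal-width bins of [A, B]."""
    counts, edges = np.histogram(samples, bins=K, range=(A, B))
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    return counts / len(samples), midpoints
```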
Recall that the $L_\infty$-norm of a function $G : \mathcal{X} \to \mathbb{R}^n$ is $\|G\|_\infty = \sup_{x \in \mathcal{X}} |G(x)|$. In this section, we approximate the uncertainty set $\mathcal{D}_{\epsilon,N}$ by a subset of the simplex in $\mathbb{R}^K$:
$$\mathcal{W}_{\epsilon,N} = \big\{ p \in \Delta_K : \| p - \hat{p}_{N,K} \|_\infty \le \epsilon \big\},$$
where $p = (p_1, \ldots, p_K)$ denotes a vector in $\mathbb{R}^K$. In turn, this will allow us to approximate the DRO problem (2) by the following:
$$\text{(ADRO)}: \qquad \min_{x \in X} \max_{p \in \mathcal{W}_{\epsilon,N}} \sum_{k=0}^{K-1} h(x, m_k)\, p_k. \qquad (6)$$
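For a fixed $x$, the inner maximization in (6) is linear in $p$ over the intersection of the simplex with an $\ell_\infty$ ball around $\hat{p}_{N,K}$, so it can be evaluated as a small linear program. The sketch below assumes SciPy is available and writes $c_k$ for $h(x, m_k)$:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_expectation(c, p_hat, eps):
    """max_{p in W_{eps,N}} sum_k c_k p_k, with W_{eps,N} the simplex
    intersected with the box |p_k - p_hat_k| <= eps."""
    K = len(c)
    res = linprog(
        -np.asarray(c),                       # linprog minimizes, so negate
        A_eq=np.ones((1, K)), b_eq=[1.0],     # p sums to one
        bounds=[(max(0.0, p - eps), min(1.0, p + eps)) for p in p_hat],
    )
    return -res.fun, res.x
```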
¹The function $\psi(d)$ quantifies the error due to estimation in a basis of polynomials of finite degree $d$.
5.1 Solving the DRO
The following result is an analogue of Theorem 4.1.
Theorem 5.1. Suppose that Assumption 4.1 holds. Let $h \in \mathbb{R}[x, z]$, and let $X$ be basic closed semialgebraic². Let $W^* \in \mathbb{R}$ denote the optimum of the problem
$$\min_{x \in X} \max_{p \in \mathcal{W}_{\epsilon,N}} \sum_{k=0}^{K-1} h(x, m_k)\, p_k. \qquad (7)$$
(i) Then, there exists a sequence of SDP relaxations $\mathrm{SDP}_r$ such that $\min \mathrm{SDP}_r \nearrow W^*$ as $r \to \infty$.
(ii) If (7) has a unique minimizer $x^*$, let $m_r$ be the sequence of subvectors of optimal solutions of $\mathrm{SDP}_r$ associated with the first-order moments of the monomials in $x$ only. Then $m_r \to x^*$ componentwise as $r \to \infty$.
5.2 Approximation error
Next, we bound the error of approximating $\mathcal{D}_{\delta,N}$ with $\mathcal{W}_{\epsilon,N}$. This error depends only on the "degree" $K$ of the histogram approximation.
Theorem 5.2. Suppose that the support of $\theta$ is the interval $[A, B]$. Suppose that $|h(x, z)| \le H$ for all $x \in X$ and $z \in [A, B]$. Let $\tilde{M} \triangleq \sup\{f''(z) : f \in \mathcal{D}_{\delta,N},\ z \in [A, B]\}$ be finite. Let $g_x(z) \triangleq h(x, z) f(z)$ and let $M \triangleq \sup\{g_x'(z) : f \in \mathcal{D}_{\delta,N},\ z \in [A, B]\}$ be finite. For every $\delta \le K\epsilon/(B - A)$ and density function $f \in \mathcal{D}_{\delta,N}$, we have a density vector $p \in \mathcal{W}_{\epsilon,N}$ such that
$$\Big| \int_{z \in [A,B]} h(x, z) f(z)\, dz - \sum_{k=0}^{K-1} h(x, m_k)\, p_k \Big| \le (M + H\tilde{M})(B - A)^3 / (24 K^2).$$
5.3 Consistency of the uncertainty set
Given $\epsilon$ and $\delta$, we consider in this section the number of samples $N$ that we need to ensure that the unknown probability density is in the uncertainty set $\mathcal{D}_{\epsilon,N}$ with probability $1 - \delta$. The consistency guarantee for the univariate histogram uncertainty set follows as a corollary of the following univariate Dvoretzky-Kiefer-Wolfowitz inequality.
Theorem 5.3 ([16]). Let $\hat{F}_{N,K}$ denote the distribution function associated with the probabilities $\hat{p}_{N,K}$, and $F^*$ the distribution function associated with the density function $f^*$. If $F^*$ is continuous, then $\mathbb{P}(\| F^* - \hat{F}_{N,K} \|_\infty > \epsilon) \le 2 \exp(-2N\epsilon^2)$.
Corollary 5.4. Let $p^*$ denote the histogram density vector of $\theta$ induced by the true density $f^*$. For every $N$, we have $\mathbb{P}(p^* \in \mathcal{W}_{\epsilon,N}) \ge 1 - 2\exp(-2N\epsilon^2)$.
Remark 2. Provided that the density $f^*$ is Lipschitz continuous, it follows that the optimal value of (A1) converges to the optimal value without uncertainty as the size $\epsilon$ of the uncertainty set tends to zero and the number of samples $N$ tends to infinity.
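Inverting the bound in Corollary 5.4 gives the sample size needed for a prescribed confidence level; the helper below is our illustration, not part of the paper:

```python
import math

def dkw_sample_size(eps, delta):
    """Smallest N with 2*exp(-2*N*eps**2) <= delta, so that p* lies in
    W_{eps,N} with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
```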
6 Application to water network optimization
In this section, we consider a problem of optimal operation of a water distribution network (WDN).
Let G = (V, E) denote a graph, i.e., V is the set of nodes and E the set of pipes connecting the
nodes in a WDN. Let $w_i$ denote the pressure, $e_i$ the elevation, and $\theta^i$ the demand at node $i \in V$, $q_{i,j}$ the flow from $i$ to $j$, and $\ell_{i,j}$ the loss caused by friction in case of flow from $i$ to $j$, for $(i, j) \in E$. Our objective is to minimize the overall pressure at selected critical points $V_1 \subset V$ in the WDN by optimally setting a number of pressure reducing valves (PRVs) located on certain pipes in the network, while adhering to the conservation laws for flow and pressure:
$$\min_{(w,q) \in X} h(w, q, \theta), \qquad (8)$$
²Since $S$ is an interval, the assumption is trivially satisfied for $S$.
where
$$h(w, q, \theta) := \sum_{i \in V_1} w_i + \gamma \sum_{j \in V} \Big( \theta^j - \sum_{k \ne j} q_{k,j} + \sum_{l \ne j} q_{j,l} \Big)^2,$$
$$X := \Big\{ (w, q) \in \mathbb{R}^{|V| + 2|E|} \ \Big|\ w_{\min} \le w_i \le w_{\max},\ q_{\min} \le q_{i,j} \le q_{\max},\ q_{i,j}\big(w_j + e_j - w_i - e_i + \ell_{i,j}(q_{i,j})\big) \le 0,\ w_j + e_j - w_i - e_i + \ell_{i,j}(q_{i,j}) \ge 0,\ \forall (i,j) \Big\}.$$
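To illustrate how the objective decomposes, the sketch below evaluates $h(w, q, \theta)$ given node-level flow aggregates; packing the pairwise flows $q_{k,j}$ into per-node inflow and outflow totals is our assumption for compactness.

```python
import numpy as np

def pressure_objective(w, q_in, q_out, theta, V1, gamma):
    """h(w, q, theta): total pressure at the critical nodes V1 plus a quadratic
    penalty on flow-conservation violations at every node.

    q_in[j]  -- sum over k of the flows q_{k,j} into node j
    q_out[j] -- sum over l of the flows q_{j,l} out of node j
    """
    imbalance = theta - q_in + q_out
    return w[V1].sum() + gamma * np.sum(imbalance ** 2)
```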
We assume that $\ell_{i,j}$ is a quadratic function of $q_{i,j}$. The PRV sets the pressure $w_i$ at node $i$. The derivation of (8) and a detailed description of the problem appear in [17]. Thus, $h \in \mathbb{R}[w, q, \theta]$ and $X$ is a basic closed semialgebraic set. For a fixed vector of demands $\theta = (\theta^1, \ldots, \theta^{|V|})$, (8) falls into the class (1). In real-world water networks, the demand $\theta$ is uncertain. Given are ranges for the possible realizations of nodal demands, i.e., the support of $\theta$ is given by $S := \{\bar{z} \in \mathbb{R}^{|V|} \mid z_i^{\min} \le \bar{z}_i \le z_i^{\max}\}$. Moreover, we assume that samples $\theta_1, \ldots, \theta_N$ of $\theta$ are given and that they correspond to sensor measurements. Therefore, the distributionally robust counterpart of (8) is of the form of ADRO (6).
Figure 1: (a) 25 node network with PRVs on pipes 1, 5 and 11. (b) Scatter plot of demand at node
15 over four months overlaid over the 24 hours of a day.
We consider the benchmark WDN with |V | = 25 and |E| = 37 of [18], which is illustrated in
Figure 1 (a). We assign demand values at the nodes of this WDN according to real data collected in
an anonymous major city. In our experiment we assume the demands at all nodes, except at node
15, are fixed; for node 15, N = 120 samples of daily demands were collected over four months; the
dataset is shown in Figure 1 (b). Node 15 has been selected because it is one of the largest consumers
and has a demand profile with the largest variation.
First, we consider the uncertainty set $\mathcal{W}_{\epsilon,N}$ constructed from a histogram estimation with $K = 5$ bins. We consider (a) the deterministic problem (8) with the three values $\theta_{\min} := \min_i \theta_i^{15}$, $\bar{\theta} := \frac{1}{N} \sum_i \theta_i^{15}$, and $\theta_{\max} := \max_i \theta_i^{15}$ as the demand at node 15, (b) the distributionally robust counterpart (A1) with $\epsilon = 0.2$ and $\gamma = 1$, and (c) the classical robust formulation of (8) with an uncertainty range $[\theta_{\min}, \theta_{\max}]$ without any distributional assumption, i.e., the problem $\min_{(w,q) \in X} \max_{\theta^{15} \in [\theta_{\min}, \theta_{\max}]} h(w, q, \theta^{15})$, which is equivalent to
$$\min_{(w,q) \in X} \max\big( h(w, q, \theta_{\min}),\ h(w, q, \theta_{\max}) \big) \qquad (9)$$
since $\big(\theta^{15} - \sum_{k \ne 15} q_{k,15} + \sum_{l \ne 15} q_{15,l}\big)^2$ in (8) is convex quadratic in $\theta^{15}$ and attains its maximum at the boundary of $[\theta_{\min}, \theta_{\max}]$. We solve (9) by solving the two polynomial optimization problems.
All three cases (a)–(c) are polynomial optimization problems, which we solve by first applying the sparse SDP relaxation of first order [19] with SDPA [20] as the SDP solver, and then applying IPOPT [21] with the SparsePOP solution as the starting point. Computations were run on a single blade server with 100 GB of RAM (80 GB free) and a processor speed of 3.5 GHz. The total computation time is denoted $t_C$.
| $\theta^{15}$   | $t_C$ | optimal setting    | $\sum_{i \in V_1} w_i$ |
| $\theta_{\min}$ | 738   | (15.0, 15.7, 15.9) | 46.7 |
| $\bar{\theta}$  | 868   | (15.0, 15.5, 15.6) | 46.1 |
| $\theta_{\max}$ | 624   | (15.0, 15.4, 15.5) | 45.9 |

Table 1: Results for the non-robust case (a).
| Problem | $t_C$ | optimal setting    | objective            | $\sum_{i \in V_1} w_i$ |
| DRO (b) | 1315  | (15.0, 15.5, 15.7) | $6.62 \times 10^5$   | 46.2 |
| RO (c)  | 1460  | (15.0, 16.9, 17.3) | $1.54 \times 10^6$   | 49.2 |

Table 2: Results for the DRO case (b) and the classical robust case (c).
The results for the deterministic case (a) show that the optimal setting and the overall pressure sum $\sum_{i \in V_1} w_i$ differ even when the demand at only one node changes, as reported in Table 1.
Comparing the distributionally robust (b) and robust (c) optimal solutions for the optimal PRV setting problem, we observe that the objective value of the distributionally robust counterpart is substantially smaller than the robust one. Thus, the distributionally robust solution is less conservative than the robust solution. Moreover, the distributionally robust setting is very close to the average-case deterministic solution $\bar{\theta}$, but it does not coincide with it. It seems to hedge the solution against the worst-case realization of the demand, given by the scenario $\theta = \theta_{\min}$, which results in the highest pressure profile. Moreover, note that solving the distributionally robust (and robust) counterpart requires the same order of magnitude of computational time as the deterministic problem. That may be due to the fact that both the deterministic and the robust problems are hard polynomial optimization problems.
7 Discussion
We introduced a notion of distributional robustness for polynomial optimization problems. The
distributional uncertainty sets based on statistical estimates for the probability density functions
have the advantage that they are data-driven and consistent with the data for increasing sample size. Moreover, they give solutions that are less conservative than classical robust optimization with value-based uncertainty sets. We have shown that these distributionally robust counterparts of polynomial optimization problems remain in the same class of problems from the perspective of computational complexity. This methodology is promising for numerous real-world decision problems,
where one faces the combined challenge of hard, non-convex models and uncertainty in the input
parameters.
We can extend the histogram method of Section 5 to the case of multivariate uncertainty, but it is
well known that the sample complexity of histogram density estimation is greater than that of polynomial density estimation. An alternative definition of the distributional uncertainty set $\mathcal{D}_{\epsilon,N}$ is to allow
functions that are not proper density functions by removing some constraints; this gives a trade-off
between reduced computational complexity and more conservative solutions.
The solution method of SDP relaxations comes without any finite-time guarantees. Although such
guarantees are hard to come by in general, an open problem is to identify special cases that give
insight into the rate of convergence of this method.
Acknowledgments
J. Y. Yu was supported in part by the EU FP7 project INSIGHT under grant 318225.
References
[1] J. B. Lasserre. A semidefinite programming approach to the generalized problem of moments. Math. Programming, 112:65–92, 2008.
[2] A. Schied. Optimal investments for robust utility functionals in complete market models. Math. Oper. Research, 30(3):750–764, 2005.
[3] E. Delage and Y. Ye. Distributionally robust optimization under moment uncertainty with applications to data-driven problems. Operations Research, 2009.
[4] D. Bertsimas, X. V. Doan, K. Natarajan, and C.-P. Teo. Models for minimax stochastic linear optimization problems with risk aversion. Math. Oper. Res., 35(3):580–602, 2010.
[5] S. Mehrotra and H. Zhang. Models and algorithms for distributionally robust least squares problems. Preprint, 2011.
[6] D. Bertsimas and I. Popescu. Optimal inequalities in probability theory: a convex optimization approach. SIAM J. Optimization, 15:780–804, 2000.
[7] P. R. Halmos. The theory of unbiased estimation. The Annals of Mathematical Statistics, 17(1):34–43, 1946.
[8] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC, 1998.
[9] L. Devroye and L. Györfi. Nonparametric Density Estimation. Wiley, 1985.
[10] P. Hall. On the rate of convergence of orthogonal series density estimators. Journal of the Royal Statistical Society, Series B, 48(1):115–122, 1986.
[11] G. Pflug and D. Wozabal. Ambiguity in portfolio selection. Quantitative Finance, 7(4):435–442, 2007.
[12] A. Shapiro and S. Ahmed. On a class of minimax stochastic programs. SIAM J. Optim., 14(4):1237–1249, 2004.
[13] A. Ben-Tal, D. den Hertog, A. de Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 2012.
[14] H. Xu and S. Mannor. Distributionally robust Markov decision processes. Mathematics of Operations Research, 37(2):288–300, 2012.
[15] R. Laraki and J. B. Lasserre. Semidefinite programming for min-max problems and games. Math. Programming A, 131:305–332, 2010.
[16] P. Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Annals of Probability, 18(3):1269–1283, 1990.
[17] B. J. Eck and M. Mevissen. Valve placement in water networks. Technical report, IBM Research, 2012. Report No. RC25307 (IRE1209-014).
[18] A. Sterling and A. Bargiela. Leakage reduction by optimised control of valves in water networks. Transactions of the Institute of Measurement and Control, 6(6):293–298, 1984.
[19] H. Waki, S. Kim, M. Kojima, M. Muramatsu, and H. Sugimoto. SparsePOP: a sparse semidefinite programming relaxation of polynomial optimization problems. ACM Transactions on Mathematical Software, 35(2), 2008.
[20] M. Yamashita, K. Fujisawa, K. Nakata, M. Nakata, M. Fukuda, K. Kobayashi, and K. Goto. A high-performance software package for semidefinite programs: SDPA 7. Technical report, Tokyo Institute of Technology, 2010.
[21] A. Waechter and L. T. Biegler. On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.
[22] M. Schweighofer. Optimization of polynomials on compact semialgebraic sets. SIAM J. Optimization, 15:805–825, 2005.
| 4943 |@word kgk:1 version:1 polynomial:46 norm:5 seems:1 open:1 pressure:7 harder:1 blade:1 reduction:1 moment:12 series:3 contains:1 com:2 comparing:1 optim:1 scatter:1 readily:1 fn:1 partition:1 plot:1 selected:2 short:1 math:4 node:13 mannor:1 gx:1 lipchitz:1 zhang:1 mathematical:3 nodal:1 constructed:6 c2:3 yuan:1 combine:2 introduce:3 x0:3 market:1 p1:1 nor:1 sdp:11 multi:1 eck:1 curse:1 valve:3 considering:1 increasing:2 subvectors:2 provided:1 solver:1 underlying:1 moreover:6 bounded:1 qmin:1 project:1 minimizes:1 substantially:1 guarantee:10 quantitative:1 every:2 finance:2 ro:1 rm:3 k2:1 uk:2 control:2 grant:1 yn:1 appear:3 kobayashi:1 tends:2 limit:1 optimised:1 kojima:1 studied:3 range:2 unique:3 acknowledgment:1 investment:3 silverman:1 delage:1 empirical:4 word:1 close:1 selection:1 interior:1 context:2 risk:3 applying:2 equivalent:1 deterministic:6 dz:2 economics:1 starting:1 convex:5 adhering:1 estimator:2 insight:2 financial:2 notion:4 variation:1 annals:2 hierarchy:2 gm:1 construction:1 pt:1 suppose:6 programming:8 us:2 element:1 approximated:1 satisfying:1 located:1 natarajan:1 distributional:19 observed:2 preprint:1 worst:2 wj:2 ensures:1 eu:1 trade:1 highest:1 complexity:3 solving:5 tight:1 basis:3 completely:1 various:1 derivation:1 describe:3 kp:1 whose:1 supplementary:1 solve:4 valued:2 say:1 statistic:2 g1:1 transform:1 itself:1 sequence:8 advantage:2 realization:3 monetary:1 adapts:1 description:1 dirac:1 convergence:2 optimum:3 converges:2 ben:1 hertog:1 derive:2 gx0:1 eq:1 come:2 differ:1 radius:1 tokyo:1 filter:1 stochastic:2 sdpa:2 material:1 bin:1 crc:1 assign:1 pbn:4 anonymous:1 elevation:1 hold:3 around:4 considered:1 hall:2 normal:1 exp:2 overlaid:1 m0:1 major:1 estimation:12 label:2 maker:1 teo:1 largest:2 city:1 tool:3 sensor:1 ekf:1 pn:2 ej:2 corollary:4 contrast:5 attains:1 kim:1 nn:1 compactness:1 archimedian:2 interested:1 overall:2 among:1 dual:1 denoted:1 k6:2 special:2 construct:5 chapman:1 identical:1 represents:1 yu:2 simplex:1 report:3 employ:3 divergence:1 replaced:1 sterling:1 n1:1 wdn:5 yamashita:1 extreme:1 semidefinite:5 primal:1 tj:1 daily:1 respective:1 pflug:1 orthogonal:2 incomplete:1 re:1 minimal:1 uncertain:15 mk:4 instance:1 zn:1 subset:1 monomials:2 uniform:1 comprised:1 optimally:1 reported:1 supx:1 gd:3 combined:1 density:39 fundamental:1 siam:3 ie:2 probabilistic:1 off:1 connecting:1 continuously:1 fbn:8 ambiguity:1 satisfied:1 management:1 oper:2 de:1 gy:1 coefficient:1 caused:1 depends:2 closed:7 sup:3 jia:1 contribution:4 minimize:1 square:3 kiefer:2 qk:2 prvs:2 yield:2 identify:1 conceptually:1 accurately:1 processor:1 definition:2 against:2 energy:1 associated:5 proof:1 sampled:1 dataset:1 recall:1 dimensionality:1 dvoretzky:2 higher:1 day:1 follow:1 methodology:1 formulation:2 furthermore:1 hand:2 ei:3 wmax:1 nonlinear:1 k22:1 ye:1 true:1 unbiased:1 counterpart:9 adequately:1 regularization:1 illustrated:1 game:1 generalized:3 outline:1 ridge:3 complete:1 ef:4 nakata:2 functional:1 physical:1 function1:1 extend:1 muramatsu:1 measurement:3 rd:1 consistency:8 trivially:1 mathematics:1 emanuele:1 portfolio:1 gj:1 etc:2 v0:1 multivariate:8 closest:1 perspective:1 belongs:2 driven:15 scenario:1 certain:1 nonconvex:2 server:1 inequality:4 yi:2 integrable:1 minimum:2 additional:1 care:1 greater:1 mr:4 converge:1 wolfowitz:2 u0:2 ii:2 rj:1 technical:2 ahmed:1 jy:1 a1:2 qi:6 variant:1 basic:7 regression:4 essentially:1 expectation:1 poisson:1 metric:2 fujisawa:1 histogram:12 iteration:1 kernel:2 interval:6 rest:1 massart:1 
induced:1 tend:1 goto:1 flow:4 call:1 easy:1 variety:1 zi:1 reduce:1 qj:1 motivated:1 utility:3 gb:2 ipopt:1 penalty:1 remark:2 rfi:1 detailed:2 involve:1 nonparametric:2 reduced:2 shapiro:1 exist:2 estimated:5 dak:1 affected:2 key:1 salient:1 four:2 queueing:1 neither:1 v1:4 ram:1 graph:1 relaxation:13 bertsimas:2 sum:2 realworld:1 prob:1 package:1 uncertainty:58 qmax:1 decision:6 appendix:1 bound:2 layer:1 quadratic:2 waegenaere:1 placement:1 constraint:13 infinity:1 prv:2 software:2 tal:1 speed:1 friction:1 min:24 martin:1 according:3 ball:3 legendre:3 smaller:1 remain:1 wi:9 lp:1 making:2 den:1 pr:1 remains:1 turn:3 needed:2 tractable:1 fp7:1 operation:4 apply:1 observe:3 appropriate:2 zimin:1 zi2:1 rennen:1 alternative:1 robustness:3 original:1 denotes:3 ensure:3 cf:1 fukuda:1 l6:2 schied:1 restrictive:1 uj:2 approximating:2 classical:4 society:1 leakage:1 objective:7 already:1 added:1 kantorovich:1 ireland:3 distance:1 separate:1 topic:1 considers:1 collected:3 water:7 reason:1 consumer:1 devroye:1 modeled:1 index:2 minn:1 mini:1 difficult:1 statement:1 expense:1 negative:1 fluid:1 rise:1 implementation:1 proper:1 unknown:7 observation:3 markov:3 benchmark:1 finite:9 gas:1 rn:10 sharp:1 introduced:1 cast:1 required:1 pipe:3 z1:1 componentwise:2 pop:5 hour:1 xm:1 challenge:1 program:2 max:16 royal:1 analogue:1 suitable:1 critical:1 natural:1 residual:2 minimax:2 technology:1 numerous:1 popescu:1 mehrotra:1 negativity:1 kj:2 literature:3 l2:2 kf:2 asymptotic:2 law:1 loss:2 allocation:1 semialgebraic:7 aversion:1 degree:4 consistent:2 doan:1 principle:1 dd:5 ibm:6 supported:1 free:1 side:1 allow:3 institute:2 fall:1 face:1 wmin:1 midpoint:1 sparse:2 distributed:2 ghz:1 boundary:1 stand:1 world:3 coincide:1 situate:1 employing:2 transaction:2 functionals:1 sj:6 approximate:2 compact:5 conservation:2 biegler:1 continuous:2 iterative:1 search:1 quantifies:1 table:3 lasserre:2 promising:1 robust:50 ca:1 inherently:1 constructing:1 domain:1 vj:2 pk:4 arise:2 profile:2 xu:1 wiley:1 position:2 candidate:1 theorem:9 rk:2 removing:1 maxi:1 dk:1 exists:4 magnitude:1 halmos:1 demand:12 tc:3 univariate:7 contained:1 applies:1 ch:3 corresponds:1 minimizer:2 satisfies:1 acm:1 hedge:1 month:2 formulated:1 consequently:1 hard:6 change:1 except:1 reducing:1 conservative:4 called:2 total:2 distributionally:21 support:4 arises:2 phenomenon:1 |
4,358 | 4,944 | Multiscale Dictionary Learning for
Estimating Conditional Distributions
Francesca Petralia
Department of Genetics and Genomic Sciences
Icahn School of Medicine at Mt Sinai
New York, NY 10128, U.S.A.
[email protected]
Joshua Vogelstein
Child Mind Institute
Department of Statistical Science
Duke University
Durham, North Carolina 27708, U.S.A.
[email protected]
David B. Dunson
Department of Statistical Science
Duke University
Durham, North Carolina 27708, U.S.A.
[email protected]
Abstract
Nonparametric estimation of the conditional distribution of a response given high-dimensional features is a challenging problem. It is important to allow not only the
mean but also the variance and shape of the response density to change flexibly
with features, which are massive-dimensional. We propose a multiscale dictionary learning model, which expresses the conditional response density as a convex
combination of dictionary densities, with the densities used and their weights dependent on the path through a tree decomposition of the feature space. A fast graph
partitioning algorithm is applied to obtain the tree decomposition, with Bayesian
methods then used to adaptively prune and average over different sub-trees in a
soft probabilistic manner. The algorithm scales efficiently to approximately one
million features. State of the art predictive performance is demonstrated for toy
examples and two neuroscience applications including up to a million features.
1 Introduction
Massive datasets are becoming a ubiquitous by-product of modern scientific and industrial applications. These data present statistical and computational challenges because many previously developed analysis approaches do not scale up sufficiently. Parsimonious models for such big data assume
high-dimensionality and relatively low sample size. Parsimonious models for such big data assume
that the density in the ambient space concentrates around a lower-dimensional (possibly nonlinear)
subspace. A plethora of methods are emerging to estimate such lower-dimensional subspaces [1, 2].
We are interested in using such lower-dimensional embeddings to obtain estimates of the conditional
distribution of some target variable(s). This conditional density estimation setting arises in a number
of important application areas, including neuroscience, genetics, and video processing. For example, one might desire automated estimation of a predictive density for a neurologic phenotype of
interest, such as intelligence, on the basis of available data for a patient including neuroimaging. The
challenge is to estimate the probability density function of the phenotype nonparametrically based
on a $10^6$-dimensional image of the subject's brain. It is crucial to avoid parametric assumptions
on the density, such as Gaussianity, while allowing the density to change flexibly with predictors.
Otherwise, one can obtain misleading predictions and poorly characterize predictive uncertainty.
There is a rich machine learning and statistical literature on conditional density estimation of a response $y \in \mathcal{Y}$ given a set of features (predictors) $x = (x_1, x_2, \ldots, x_p)^T \in \mathcal{X} \subset \mathbb{R}^p$. Common
approaches include hierarchical mixtures of experts [3, 4], kernel methods [5, 6, 7], Bayesian finite
mixture models [8, 9, 10] and Bayesian nonparametrics [11, 12, 13, 14]. However, there has been
limited consideration of scaling to large p settings, with the variational Bayes approach of [9] being
a notable exception. For dimensionality reduction, [9] follow a greedy variable selection algorithm.
Their approach does not scale to the sized applications we are interested in. For example, in a problem with p = 1, 000 and n = 500, they reported a CPU time of 51.7 minutes for a single analysis.
We are interested in problems with $p$ several orders of magnitude larger, requiring a faster computing time while also accommodating flexible nonlinear dimensionality reduction (variable selection is a limited sort of dimension reduction). To our knowledge, there are no nonparametric density regression competitors to our approach that maintain a characterization of uncertainty in estimating the conditional densities; rather, all sufficiently scalable algorithms provide point predictions and/or rely on restrictive assumptions such as linearity.
In big data problems, scaling is often accomplished using divide-and-conquer techniques. However,
as the number of features increases, the problem of finding the best splitting attribute becomes
intractable, so that CART, MARS and multiple tree models cannot be efficiently applied. Similarly,
mixture of experts becomes computationally demanding, since both mixture weights and dictionary
densities are predictor dependent. To improve efficiency, sparse extensions relying on different
variable selection algorithms have been proposed [15]. However, performing variable selection in
high dimensions is effectively intractable: algorithms need to efficiently search for the best subsets
of predictors to include in weight and mean functions within a mixture model, an NP-hard problem
[16].
In order to efficiently deal with massive datasets, we propose a novel multiscale approach which
starts by learning a multiscale dictionary of densities. This tree is efficiently learned in a first stage
using a fast and scalable graph partitioning algorithm applied to the high-dimensional observations
[17]. Expressing the conditional densities $f(y|x)$ for each $x \in \mathcal{X}$ as a convex combination of
coarse-to-fine scale dictionary densities, the learning problem in the second stage estimates the
corresponding multiscale probability tree. This is accomplished in a Bayesian manner using a novel
multiscale stick-breaking process, which allows the data to inform about the optimal bias-variance
tradeoff; weighting coarse scale dictionary densities more highly decreases variance while adding
to bias. This results in a model that borrows information across different resolution levels and
reaches a good compromise in terms of the bias-variance tradeoff. We show that the algorithm
scales efficiently to millions of features.
2 Setting
Let $X : \Omega \to \mathcal{X} \subset \mathbb{R}^p$ be a $p$-dimensional Euclidean vector-valued predictor random variable, taking values $x \in \mathcal{X}$, with marginal probability distribution $f_X$. Similarly, let $Y : \Omega \to \mathcal{Y}$ be a target-valued random variable (e.g., $\mathcal{Y} \subseteq \mathbb{R}$). For inferential expedience, we posit the existence of a latent variable $\eta : \Omega \to \mathcal{M} \subset \mathcal{X}$, where $\mathcal{M}$ is only $d$-"dimensional" and $d \ll p$. Note that $\mathcal{M}$ need not be a linear subspace of $\mathcal{X}$; rather, $\mathcal{M}$ could be, for example, a union of affine subspaces, or a smooth compact Riemannian manifold. Regardless of the nature of $\mathcal{M}$, we assume that we can approximately decompose the joint distribution as follows: $f_{X,Y,\eta} = f_{X,Y|\eta} f_\eta = f_{Y|X,\eta} f_{X|\eta} f_\eta \approx f_{Y|\eta} f_{X|\eta} f_\eta$. Hence, we assume that the signal approximately concentrates around a low-dimensional latent space, $f_{Y|X,\eta} = f_{Y|\eta}$. This is a much less restrictive assumption than the commonplace assumption in manifold learning that the marginal distribution $f_X$ concentrates around a low-dimensional latent space.
To provide some intuition for our model, we provide the following concrete example where the
distribution of $y \in \mathbb{R}$ is a Gaussian function of the coordinate $\eta \in \mathcal{M}$ along the swissroll, which is embedded in a high-dimensional ambient space. Specifically, we sample the manifold coordinate, $\eta \sim U(0, 1)$. We sample $x = (x_1, \ldots, x_p)^T$ as follows:
$$x_1 = \eta \sin(\eta); \quad x_2 = \eta \cos(\eta); \quad x_r \sim N(0, 1),\ r \in \{3, \ldots, p\}.$$
Finally, we sample $y$ from $N(\mu(\eta), \sigma(\eta))$. Clearly, $x$ and $y$ are conditionally independent given $\eta$, which is the low-dimensional signal manifold. In particular, $x$ lives on a swissroll embedded in a
$p$-dimensional ambient space, but $y$ is only a function of the coordinate $\eta$ along the swissroll $\mathcal{M}$. The left panels of Figure 1 depict this example when $\mu(\eta) = \eta$ and $\sigma(\eta) = \eta + 1$.
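A sketch that generates this example (we read $N(\mu, \sigma)$ with $\sigma$ as the standard deviation; if the paper intends the variance, the `scale` argument should be adjusted):

```python
import numpy as np

def sample_swissroll(n, p, seed=None):
    """Draw (x, y, eta) with eta ~ U(0,1), x_1 = eta*sin(eta),
    x_2 = eta*cos(eta), x_3..x_p ~ N(0,1), and y ~ N(eta, eta + 1)."""
    rng = np.random.default_rng(seed)
    eta = rng.uniform(0.0, 1.0, size=n)
    x = rng.normal(size=(n, p))            # noise coordinates
    x[:, 0] = eta * np.sin(eta)
    x[:, 1] = eta * np.cos(eta)
    y = rng.normal(loc=eta, scale=eta + 1.0)
    return x, y, eta
```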
Figure 1: Illustration of our generative model and algorithm on a swissroll. The top left panel shows the manifold $\mathcal{M}$ (a swissroll) embedded in a $p$-dimensional ambient space, where the color indicates the coordinate along the manifold, $\eta$ (only the first 3 dimensions are shown for visualization purposes). The bottom left panel shows the distribution of $y$ as a function of $\eta$; in particular, $f_{Y|\eta} = N(\eta, \eta + 1)$. The middle and right panels show our estimates of $f_{Y|\eta}$ at scales 3 and 4, respectively, which follow from partitioning our data. Sample size was $n = 10{,}000$.
3 Goal
Our goal is to develop an approach to learn about $f_{Y|X}$ from $n$ pairs of observations that we assume are exchangeable samples from the joint distribution, $(x_i, y_i) \sim f_{X,Y} \in \mathcal{F}$. Let $\mathcal{D}_n = \{(x_i, y_i)\}_{i \in [n]}$, where $[n] = \{1, \ldots, n\}$. More specifically, we seek to obtain a posterior over $f_{Y|X}$. We insist that our approach satisfies several desiderata, including most importantly: (i) it scales up to $p \approx 10^6$ in reasonable time, (ii) it yields good empirical results, and (iii) it automatically adapts to
the complexity of the data corpus. To our knowledge, no extant approach for estimating conditional
densities or posteriors thereof satisfies even our first criterion.
4 Methodology
4.1 Ms. Deeds Framework
We propose here a general modular approach which we refer to as multiscale dictionary learning for estimating conditional distributions ("Ms. Deeds"). Ms. Deeds consists of two components: (i) a
tree decomposition of the space, and (ii) an assumed form of the conditional probability model.
Tree Decomposition. A tree decomposition $\tau$ yields a multiscale partition of the data or the ambient space in which the data live. Let $(\mathcal{W}, \rho_\mathcal{W}, F_\mathcal{W})$ be a measurable metric space, where $F_\mathcal{W}$ is a Borel probability measure on $\mathcal{W}$ and $\rho_\mathcal{W} : \mathcal{W} \times \mathcal{W} \to \mathbb{R}$ is a metric on $\mathcal{W}$. Let $B_r^\mathcal{W}(w)$ be the $\rho_\mathcal{W}$-ball inside $\mathcal{W}$ of radius $r > 0$ centered at $w \in \mathcal{W}$. For example, $\mathcal{W}$ could be the data corpus $\mathcal{D}_n$, or it could be $\mathcal{X} \times \mathcal{Y}$. We define a tree decomposition as in [2, 18]. A partition tree $\tau$ of $\mathcal{W}$ consists of a collection of cells, $\tau = \{C_{j,k}\}_{j \in \mathbb{Z}, k \in \mathcal{K}_j}$. At each scale $j$, the set of cells $C_j = \{C_{j,k}\}_{k \in \mathcal{K}_j}$ provides a disjoint partition of $\mathcal{W}$ almost everywhere. We define $j = 0$ as the root node. For each $j > 0$, each set has a unique parent node. Denote by
$$A_{j,k} = \{(j', k') : C_{j,k} \subset C_{j',k'},\ j' < j\}, \qquad D_{j,k} = \{(j', k') : C_{j',k'} \subset C_{j,k},\ j' > j\}$$
respectively the ancestors and the descendants of node $(j, k)$.
Unlike classical harmonic theory, which presupposes $\tau$ (e.g., in wavelets [19]), we choose to learn $\tau$ from the data. Previously, Chen et al. [18] developed a multiscale measure estimation strategy, and proved that there exists a scale $j$ such that the approximate measure is within some bound of the true measure, under certain relatively general assumptions. We decided to simply partition the $x$'s, ignoring the $y$'s in the partitioning strategy. Our justification for this choice is as follows. First, sometimes there are many different $y$'s for many different applications. In such cases, we do not want to bias the partitioning toward any specific $y$'s, all the more so when new unknown $y$'s may later emerge. Second, because the $x$'s are so much higher dimensional than the $y$'s in our applications of interest, the partitions would be dominated by the $x$'s unless we chose a partitioning strategy that emphasized the $y$'s. Thus, our strategy mitigates this difficulty (while certainly introducing others).
Given that we are going to partition using only the $x$'s, we still face the choice of precisely how to partition. A fully Bayesian approach would construct a large number of partitions and integrate over them to obtain posteriors. However, such a fully Bayesian strategy remains computationally intractable at scale, so we adopt a hybrid strategy. Specifically, we employ METIS [17], a well-known, relatively efficient multiscale partitioning algorithm with demonstrably good empirical performance on a wide range of graphs. Given $n$ observations, i.e., $x_i = (x_{i1}, \ldots, x_{ip})^T \in \mathcal{X}$ for $i \in [n]$, the graph construction follows by computing all pairwise distances using $\rho(x_u, x_v) = \| \tilde{x}_u - \tilde{x}_v \|_2$, where $\tilde{x}$ is the whitened $x$ (i.e., mean subtracted and variance normalized). We let there be an edge between $x_u$ and $x_v$ whenever $e^{-\rho(x_u, x_v)^2} > t$, where $t$ is some threshold chosen to elicit the desired sparsity level. Applying METIS recursively on the graph constructed in this way yields a single tree (see supplementary material for further details).
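A sketch of the graph construction (the recursive METIS bisection itself is delegated to an external package, e.g. a Python METIS binding, and is omitted here):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def similarity_graph(X, t):
    """Adjacency matrix with an edge (u, v) iff exp(-rho(x_u, x_v)^2) > t,
    where rho is the Euclidean distance between whitened rows of X."""
    Xw = (X - X.mean(axis=0)) / X.std(axis=0)
    D = squareform(pdist(Xw, metric="euclidean"))
    A = (np.exp(-D ** 2) > t).astype(int)
    np.fill_diagonal(A, 0)                 # no self-loops
    return A
```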
Conditional Probability Model. Given the tree decomposition of the data, we place a nonparametric prior over the tree. Specifically, we define $f_{Y|X}$ as
$$f_{Y|X} = \sum_{j \in \mathbb{Z}} \pi_{j,k_j(x)} f_{j,k_j(x)}(y|x), \qquad (1)$$
where $k_j(x)$ is the set at scale $j$ to which $x$ has been allocated and $\pi_{j,k_j(x)}$ are weights across scales such that $\sum_{j \in \mathbb{Z}} \pi_{j,k_j(x)} = 1$. We let the weights in Eq. (1) be generated by a stick-breaking process
[20]. For each node C_{j,k} in the partition tree, we define a stick length V_{j,k} ∼ Beta(1, α). The parameter α encodes the complexity of the model, with α = 0 corresponding to the case in which f(y|x) = f(y). The stick-breaking process is defined as

    π_{j,k} = V_{j,k} ∏_{(j',k')∈A_{j,k}} [1 − V_{j',k'}],          (2)

where Σ_{(j',k')∈A_{j,k}} π_{j',k'} = 1. The implication of this is that each scale within a path is weighted to optimize the bias/variance trade-off across scales. We refer to this prior as a multiscale stick-breaking process. Note that this Bayesian nonparametric prior assigns a positive probability to all
possible paths, including those not observed in the training data. Thus, by adopting this Bayesian
formulation, we are able to obtain posterior estimates for any newly observed data, regardless of
the amount and variability of training data. This is a pragmatically useful feature of the Bayesian
formulation, in addition to the alleviation of the need to choose a scale [18].
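Returning to Eq. (2): restricted to a single root-to-leaf path, the ancestors A_{j,k} are exactly the nodes above scale j on that path, so the weights reduce to a standard one-dimensional stick-breaking construction. A minimal sketch (names are illustrative):

    import numpy as np

    def stick_breaking_weights(V):
        """Weights pi_j = V_j * prod_{j' < j} (1 - V_{j'}) along one path.

        V : stick lengths V_{j,k} ~ Beta(1, alpha), ordered from the root
        to the finest scale.  The weights sum to at most 1; the remaining
        mass sits on scales deeper than the truncation.
        """
        V = np.asarray(V, dtype=float)
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))
        return V * remaining

    rng = np.random.default_rng(0)
    pi = stick_breaking_weights(rng.beta(1.0, 1.0, size=8))  # alpha = 1
    print(pi, pi.sum())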
Each f_{j,k} in Eq. (1) is an element of a family of distributions. This family might be quite general, e.g., all possible conditional densities, or quite simple, e.g., Gaussian distributions. Moreover, the family can adapt with j or k, being more complex at the coarser scales (for which the n_{j,k}'s are larger),
and simpler for the finer scales (or partitions with fewer samples). We let the family of conditional densities for y be Gaussian for simplicity; that is, we assume that f_{j,k} = N(μ_{j,k}, σ_{j,k}) with μ_{j,k} ∈ R and σ_{j,k} ∈ R_+. Because we are interested in posteriors over the conditional distribution f_{Y|X}, we place relatively uninformative but conjugate priors on μ_{j,k} and σ_{j,k}; specifically, assuming the y's have been whitened and are unidimensional, μ_{j,k} ∼ N(0, 1) and σ_{j,k} ∼ IG(a, b). Obviously, other choices, such as finite or infinite mixtures of Gaussians, are also possible for continuous valued data.
4.2 Inference
We introduce the latent variable ℓ_i ∈ Z, for i ∈ [n], denoting the multiscale level used by the ith observation. Let n_{j,k} be the number of observations in C_{j,k}, and let k_h(x_i) be a variable indicating the set at level h to which x_i has been allocated. Each Gibbs sampler iteration can be summarized in the following steps (a code sketch of step (i) follows the list):

(i) Update ℓ_i by sampling from the multinomial full conditional:

    Pr(ℓ_i = j | −) = π_{j,k_j(x_i)} f_{j,k_j(x_i)}(y_i | x_i) / Σ_{s∈Z} π_{s,k_s(x_i)} f_{s,k_s(x_i)}(y_i | x_i).

(ii) Update the stick-breaking random variables V_{j,k}, for any j ∈ Z and k ∈ K_j, from Beta(α', β') with α' = 1 + n_{j,k} and β' = α + Σ_{(r,s)∈D_{j,k}} n_{r,s}.

(iii) Update μ_{j,k} and σ_{j,k}, for any j ∈ Z and k ∈ K_j, by sampling from

    μ_{j,k} ∼ N(ξ_{j,k} τ_{j,k} ȳ_{j,k}, ξ_{j,k}),   σ_{j,k} ∼ IG(a*, b + 0.5 Σ_{i∈I_{j,k}} (y_i − μ_{j,k})²),

where ξ_{j,k} = (1 + τ_{j,k})^{−1}, τ_{j,k} = n_{j,k}/σ_{j,k}, a* = a + n_{j,k}/2, ȳ_{j,k} is the average of the observations {y_i} allocated to cell C_{j,k}, and I_{j,k} = {i : ℓ_i = j, x_i ∈ C_{j,k}}.
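A minimal sketch of step (i), assuming the Gaussian form of f_{j,k} from §4.1 and treating σ_{j,k} as a variance; all names are illustrative, and the arrays range over the scales on the path containing x_i:

    import numpy as np
    from scipy.stats import norm

    def sample_level(y_i, pi, mu, sigma, rng):
        """Draw l_i from its multinomial full conditional (step (i)).

        pi, mu, sigma : arrays over scales j on the path of x_i, holding
        pi_{j,k_j(x_i)} and the parameters of f_{j,k_j(x_i)} = N(mu, sigma).
        """
        lik = norm.pdf(y_i, loc=mu, scale=np.sqrt(sigma))  # sigma = variance
        probs = pi * lik
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)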
To make predictions, the Gibbs sampler was run with up to 20,000 iterations, including a burn-in of 1,000 (see supplementary material for details). Gibbs sampler chains were stopped by testing normality of normalized averages of functions of the Markov chain [21]. The parameters (a, b) and α appearing in the prior densities of the μ_{j,k}'s and V_{j,k}'s were set to (3, 1) and 1, respectively. All predictions used a leave-one-out strategy.
4.3 Simulation Studies
In order to assess the predictive performance of the proposed model, we considered the four different simulation scenarios described below (a sampling sketch for scenario (1) follows the list):

(1) Nonlinear Mixture We first consider a relatively simple yet nonlinear joint model, with a conditional Gaussian mixture y | θ ∼ |θ| N(μ₁, σ₁) + (1 − |θ|) N(μ₂, σ₂), a marginal distribution for each dimension of x, x_r | θ ∼ N(θ, σ_x), r ∈ {1, 2, ..., p}, and a uniform distribution over the latent manifold, θ ∼ sin(U(0, c)). In the simulations we let (μ₁, σ₁) = (−2, 1), (μ₂, σ₂) = (2, 1), σ_x = 0.1, c = 20, and p = 1000. Thus, f_{Y|X} is a highly nonlinear function of x, and even of θ, and x is high-dimensional.

(2) Swissroll We then return to the swissroll example of Figure 1; in Figure 3 we show results for the parameter setting (π, 1).

(3) Linear Subspace Letting Φ ∈ R^{(p+1)×q} and Λ be a q × d "diagonal" matrix (meaning all entries other than the first d < q elements of the diagonal are zero), we assume the following model: (Y, X) | θ ∼ N_{p+1}(ΦΛθ, I), where Φ ∼ S_{p+1,d} indicates Φ is uniformly sampled from the set of all orthonormal d-frames in R^{p+1} (a Stiefel manifold), Λ_{ii} ∼ IG(a_Λ, b_Λ) for i ∈ {1, ..., d} and all other elements of Λ are zero, and θ ∼ N_d(0, I). In the simulation, we let q = d = 5 and (a_Λ, b_Λ) = (1, 0.25).

(4) Union of Linear Subspaces This model is a direct extension of the linear subspace model, as it is a union of subspaces. We let the dimensionality of each subspace vary to demonstrate the generality of our procedure. Specifically, we assume (Y, X) | θ ∼ Σ_{g=1}^G π_g N_{p+1}(Φ_g Λ_g θ, I), π ∼ Dirichlet(α), θ ∼ N_d(0, I), where Φ_g ∼ S_{p+1,g} and Λ_g is "diagonal" with (Λ_g)_{ii} ∼ IG(a_g, b_g) for i ∈ {1, ..., g}, and the remaining elements of Λ_g are zero. In the simulation, we let G = 5, α = (1, ..., 1)^T, and (a_g, b_g) = (a_Λ, b_Λ) as above.
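A sampling sketch for scenario (1), as promised above. We treat σ_x as a variance (hence the square root); that convention, and all names, are our own assumptions:

    import numpy as np

    def sample_nonlinear_mixture(n, p=1000, c=20.0, sigma_x=0.1, seed=0):
        """Draw n (x, y) pairs from simulation scenario (1)."""
        rng = np.random.default_rng(seed)
        theta = np.sin(rng.uniform(0.0, c, size=n))        # latent manifold
        X = rng.normal(loc=theta[:, None], scale=np.sqrt(sigma_x), size=(n, p))
        pick = rng.random(n) < np.abs(theta)               # mixture indicator
        y = np.where(pick, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))
        return X, y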
4.4 Neuroscience Applications
We assessed the predictive performance of the proposed method on two very different neuroimaging
datasets. For all analyses, each variable was normalized by subtracting its mean and dividing by
its standard deviation. The prior specification and Gibbs sampler described in §4.1 and §4.2 were utilized.
In the first experiment we investigated the extent to which we could predict creativity (as measured
via the Composite Creativity Index [22]) via a structural connectome dataset collected at the Mind
Research Network (data were collected as described in Jung et al. [23]). For each subject, we
estimate a 70 vertex undirected weighted brain-graph using the Magnetic Resonance Connectome
Automated Pipeline (MRCAP) [24] from diffusion tensor imaging data [25]. Because our graphs are
undirected and lack self-loops, we have a total of p = (70 choose 2) = 2,415 potential weighted edges. The
p-dimensional feature vector is defined by the natural logarithm of the vectorized matrix described
above.
The second dataset comes from a resting-state functional magnetic resonance experiment as part of
the Autism Brain Imaging Data Exchange [26]. We selected the Yale Child Study Center for analysis. Each brain-image was processed using the Configurable Pipeline for Analysis of Connectomes
(CPAC) [27]. For each subject, we computed a measure of normalized power at each voxel called fALFF [28]. To ensure the existence of a nonlinear signal relating these predictors, we let y_i correspond to an estimate of overall head motion in the scanner, called mean framewise displacement (FD), computed as described in Power et al. [29]. In total, there were p = 902,629 voxels.
4.5 Evaluation Criteria
To compare algorithmic performance we considered the ratio r_m^A, defined as r_m^A = ξ(MSB)/ξ(A), where ξ is the quantity of interest (for example, CPU time in seconds or mean squared error), MSB is our approach, and A is the competitor algorithm. To obtain mean-squared error estimates from MSB, we select our posterior mean as a point-estimate (the comparison algorithms do not generate posterior predictions, only point estimates). For each simulation scenario, we sampled multiple datasets and computed the matched distribution of r_m^A. In other words, rather than running simulations and reporting the distribution of performance for each algorithm, we compare the algorithms per simulation. This provides a much more informative indication of algorithmic performance, in that we indicate the fraction of simulations on which one algorithm outperforms another on some metric. For each example, we sampled 20 datasets to obtain estimates of the distribution over r_m^A. All experiments were performed on a typical workstation, an Intel Core i7-2600K Quad-Core Processor with 8192 MB of RAM.
5 Results
5.1 Illustrative Example
The middle and right panels of Figure 1 depict the quality of partitioning and density estimation for the swissroll example described in §2, with the ambient dimension p = 1000 and the predictive manifold dimension d = 1. We sampled n = 10⁴ samples for this illustration. At scale 3 we have 4 partitions, and at scale 4 we have 8 (note that the partition tree, in general, need not be binary). The top panels are color coded to indicate which x_i's fall into which partition. Although imperfect, it should be clear that the data are partitioned very well. The bottom panels show the resulting estimates of the posteriors at the two scales. These posteriors are piecewise constant, as they are invariant to the manifold coordinate within a given partition.
To obviate the need to choose a scale at which to make a prediction, we adopt a Bayesian approach and integrate across scales. Figure 2 shows the estimated density of two observations of model (1) with parameters (μ₁, σ₁) = (−2, 1), (μ₂, σ₂) = (2, 1), σ_x = 0.1, and c = 20 for different sample sizes. Posteriors of the conditional density f_{Y|X} were computed for various sample sizes. Figure 2 suggests that our estimate of f_{Y|X} approaches the true density as the number of observations in the training set increases. We are unable to compare our strategy for posterior estimation to previous literature because we are unaware of previous Bayesian approaches for this problem that scale up to problems of this size. Therefore, we numerically compare the performance of our point-estimates (which we define as the posterior mean of f̂_{Y|X}) with the predictions of the competitor algorithms.
5.2 Quantitative Comparisons for Simulated Data
Figure 3 compares the numerical performance of our algorithm (MSB) with Lasso (black), CART
(red), and PC regression (green) in terms of both mean-squared error (top) and CPU time (bottom)
for models (2), (3), and (4) in the left, middle, and right panels respectively. These figures show
relative performance on a per simulation basis, thus enabling a much more powerful comparison
than averaging performance for each algorithm over a set of simulations. Note that these three simulations span a wide range of models, including nonlinear smooth manifolds such as the swissroll (model 2), relatively simple linear subspace manifolds (model 3), and a union of linear subspaces model (model 4, which is neither linear nor a manifold).

Figure 2: Illustrative example of model (1) suggesting that our posterior estimates of the conditional density are converging as n increases, even when f_{Y|θ} is highly nonlinear and f_{X|θ} is very high-dimensional. True (red) and estimated (black) density (50th percentile: solid line, 2.5th and 97.5th percentiles: dashed lines) for two data positions along the manifold (top panels: θ ≈ −0.9, bottom panels: θ ≈ 0.5), considering different training set sizes n ∈ {100, 150, 200}.
In terms of predictive accuracy, the top panels show that for all three simulations, in every dimensionality that we considered (including p = 0.5 × 10⁶), MSB is more accurate than Lasso, CART, or PC regression. Note that this is the case even though MSB provides much more information about the posterior f_{Y|X}, yielding an entire posterior over f_{Y|X} rather than merely a point estimate.

In terms of computational time, MSB is much faster than the competitors for large p and n, as shown in the bottom three panels. The supplementary materials show that computational time for MSB is relatively constant as a function of p, whereas Lasso's computational time grows considerably with p. Thus, for large enough p, MSB is significantly faster than Lasso. MSB is faster than CART and PC regression for all p and n under consideration. Thus, it is clear from these simulations that MSB has better scaling properties, in terms of both predictive accuracy and computational time, than the competitor methods.
Figure 3: Numerical results for various simulation scenarios. Top plots depict the relative mean-squared error (MSE ratio) of MSB (our approach) versus CART (red), Lasso (black), and PC regression (green) as a function of the ambient dimension of x. Bottom plots depict the ratio of CPU time as a function of sample size. The three simulation scenarios are: swissroll (left), linear subspaces (middle), union of linear subspaces (right). MSB outperforms CART, Lasso, and PC regression in all three scenarios regardless of ambient dimension (r_mse^A < 1 for all p). MSB compute time is relatively constant as n or p increases, whereas Lasso's compute time increases; thus, as n or p increases, MSB CPU time becomes less than Lasso's. MSB was always significantly faster than CART and PC regression, regardless of n or p. For all panels, n = 100 when p varies, and p = 300k when n varies, where k indicates 1000, e.g., 300k = 3 × 10⁵.
Table 1: Neuroscience application quantitative performance comparisons. Squared error predictive accuracy per subject (using leave-one-out) was computed. We report the mean and standard deviation (s.d.) across subjects of squared error, and CPU time (in seconds). We compare multiscale stick-breaking (MSB), CART, Lasso, random forest (RF), and PC regression. MSB outperforms all the competitors in terms of predictive accuracy and scalability. Only MSB and Lasso even ran for the ~10⁶ dimensional application. "(best)" marks the best MSE per dataset; * indicates best CPU time.

    DATA        n    p       MODEL           MSE (s.d.)           TIME (s.d.)
    CREATIVITY  108  2,415   MSB             0.56 (0.85) (best)   1.1 (0.02)
                             CART            1.10 (1.00)          0.9 (0.01)
                             LASSO           0.63 (0.95)          0.40 (0.10)*
                             RF              0.57 (0.90)          78.2 (0.59)
                             PC REGRESSION   0.65 (0.88)          0.46 (0.37)
    MOVEMENT    56   ~10^6   MSB             0.76 (0.90) (best)   20.98 (2.31)*
                             LASSO           1.02 (0.98)          96.18 (9.66)

5.3 Quantitative Comparisons for Neuroscience Applications
Table 1 shows the mean and standard deviation of point-estimate predictions per subject (using leave-one-out) for the two neuroscience applications that we investigated: (i) predicting creativity from diffusion MRI (creativity) and (ii) predicting head motion based on functional MRI (movement). For the creativity application, p was relatively small, merely 2,415, so we could run Lasso, CART, and random forests (RF) [30]. For the movement application, p was nearly one million. For both applications, MSB yielded improved predictive accuracy over all competitors. Although CART and Lasso were faster than MSB on the relatively low-dimensional predictor example (creativity), their computational scaling was poor, such that CART yielded a memory fault on the higher-dimensional case, and Lasso required substantially more time than MSB.
6 Discussion
In this work we have introduced a general formalism to estimate conditional distributions via multiscale dictionary learning. An important property of any such strategy is the ability to scale up to ultrahigh-dimensional predictors. We considered simulations and real-data examples where the dimensionality of the predictor space approached one million. To our knowledge, no other approach to learn conditional distributions can run at this scale. Our approach explicitly assumes that the posterior f_{Y|X} can be well approximated by projecting x onto a lower-dimensional space, f_{Y|X} ≈ f_{Y|θ}, where θ ∈ M ⊂ R^d and x ∈ R^p. Note that this assumption is much less restrictive than assuming that x is close to a low-dimensional space; rather, we only assume that the part of f_X that "matters" for predicting y lives near a low-dimensional subspace. Because a fully Bayesian strategy remains computationally intractable at this scale, we developed an empirical Bayes approach, estimating the partition tree based on the data, but integrating over scales and posteriors.
We demonstrate that even though we obtain posteriors over the conditional distribution f_{Y|X}, our approach, dubbed multiscale stick-breaking (MSB), outperforms several standard machine learning algorithms in terms of both predictive accuracy and computational time, as the sample size (n) and ambient dimension (p) increase. This improvement was demonstrated when M was a swissroll, a latent subspace, a union of latent subspaces, and real data (for which the latent space may not even exist).
In future work, we will extend these numerical results to obtain theory on posterior convergence. Indeed, while multiscale methods benefit from a rich theoretical foundation [2], the relative advantages and disadvantages of a fully Bayesian approach, in which one can estimate posteriors over all functionals of f_{Y|X} at all scales, remain relatively unexplored.
References
[1] I. U. Rahman, I. Drori, V. C. Stodden, and D. L. Donoho. Multiscale representations for manifold-valued data. SIAM J. Multiscale Model, 4:1201–1232, 2005.
[2] W.K. Allard, G. Chen, and M. Maggioni. Multiscale geometric methods for data sets II: geometric wavelets. Applied and Computational Harmonic Analysis, 32:435–462, 2012.
[3] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixture of local experts. Neural Computation, 3:79–87, 1991.
[4] W. X. Jiang and M. A. Tanner. Hierarchical mixtures-of-experts for exponential family regression models: approximation and maximum likelihood estimation. Annals of Statistics, 27:987–1011, 1999.
[5] J. Q. Fan, Q. W. Yao, and H. Tong. Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika, 83:189–206, 1996.
[6] M. P. Holmes, G. A. Gray, and C. L. Isbell. Fast kernel conditional density estimation: a dual-tree Monte Carlo approach. Computational Statistics & Data Analysis, 54:1707–1718, 2010.
[7] G. Fu, F. Y. Shih, and H. Wang. A kernel-based parametric method for conditional density estimation. Pattern Recognition, 44:284–294, 2011.
[8] D. J. Nott, S. L. Tan, M. Villani, and R. Kohn. Regression density estimation with variational methods and stochastic approximation. Journal of Computational and Graphical Statistics, 21:797–820, 2012.
[9] M. N. Tran, D. J. Nott, and R. Kohn. Simultaneous variable selection and component selection for regression density estimation with mixtures of heteroscedastic experts. Electronic Journal of Statistics, 6:1170–1199, 2012.
[10] A. Norets and J. Pelenis. Bayesian modeling of joint and conditional distributions. Journal of Econometrics, 168:332–346, 2012.
[11] J. E. Griffin and M. F. J. Steel. Order-based dependent Dirichlet processes. Journal of the American Statistical Association, 101:179–194, 2006.
[12] D. B. Dunson, N. Pillai, and J. H. Park. Bayesian density regression. Journal of the Royal Statistical Society, Series B, 69:163–183, 2007.
[13] Y. Chung and D. B. Dunson. Nonparametric Bayes conditional distribution modeling with variable selection. Journal of the American Statistical Association, 104:1646–1660, 2009.
[14] S. T. Tokdar, Y. M. Zhu, and J. K. Ghosh. Bayesian density regression with logistic Gaussian process and subspace projection. Bayesian Analysis, 5:319–344, 2010.
[15] I. Mossavat and O. Amft. Sparse Bayesian hierarchical mixture of experts. IEEE Statistical Signal Processing Workshop (SSP), 2011.
[16] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. The Journal of Machine Learning Research, 3:1157–1182, 2003.
[17] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 20(1):359–392, 1999.
[18] G. Chen, M. Iwen, S. Chin, and M. Maggioni. A fast multiscale framework for data in high-dimensions: measure estimation, anomaly detection, and compressive measurements. In VCIP, 2012 IEEE, 2012.
[19] Ingrid Daubechies. Ten Lectures on Wavelets (CBMS-NSF Regional Conference Series in Applied Mathematics). SIAM: Society for Industrial and Applied Mathematics, 1992.
[20] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[21] Didier Chauveau and Jean Diebolt. An automated stopping rule for MCMC convergence assessment. Computational Statistics, 14:419–442, 1998.
[22] R. Arden, R. S. Chavez, R. Grazioplene, and R. E. Jung. Neuroimaging creativity: a psychometric view. Behavioural Brain Research, 214:143–156, 2010.
[23] R.E. Jung, R. Grazioplene, A. Caprihan, R.S. Chavez, and R.J. Haier. White matter integrity, creativity, and psychopathology: disentangling constructs with diffusion tensor imaging. PLoS ONE, 5(3):e9818, 2010.
[24] W.R. Gray, J.A. Bogovic, J.T. Vogelstein, B.A. Landman, J.L. Prince, and R.J. Vogelstein. Magnetic resonance connectome automated pipeline: an overview. IEEE Pulse, 3(2):42–48, March 2010.
[25] Susumu Mori and Jiangyang Zhang. Principles of diffusion tensor imaging and its applications to basic neuroscience research. Neuron, 51(5):527–539, September 2006.
[26] ABIDE. http://fcon_1000.projects.nitrc.org/indi/abide/.
[27] S. Sikka, J.T. Vogelstein, and M.P. Milham. Towards automated analysis of connectomes: the Configurable Pipeline for the Analysis of Connectomes (C-PAC). Neuroinformatics, 2012.
[28] Q.-H. Zou, C.-Z. Zhu, Y. Yang, X.-N. Zuo, X.-Y. Long, Q.-J. Cao, Y.-F. Wang, and Y.-F. Zang. An improved approach to detection of amplitude of low-frequency fluctuation (ALFF) for resting-state fMRI: fractional ALFF. Journal of Neuroscience Methods, 172(1):137–141, July 2008.
[29] J. D. Power, K. A. Barnes, C. J. Stone, and R. A. Olshen. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage, 59:2142–2154, 2012.
[30] Leo Breiman. Statistical modeling: the two cultures. Statistical Science, 16(3):199–231, 2001.
| 4944 |@word mri:3 middle:4 villani:1 nd:2 seek:1 carolina:2 simulation:17 decomposition:7 jacob:1 pg:1 elisseeff:1 pulse:1 solid:1 recursively:1 reduction:3 series:2 denoting:1 outperforms:4 nowlan:1 yet:1 numerical:3 partition:15 informative:1 shape:1 plot:2 depict:4 update:3 msb:24 intelligence:1 greedy:1 generative:1 fewer:1 selected:1 ith:1 core:2 characterization:1 coarse:2 provides:3 node:4 didier:1 org:1 simpler:1 zhang:1 along:4 dn:2 constructed:1 beta:2 direct:1 framewise:1 ingrid:1 descendant:1 consists:2 inside:1 introduce:1 manner:2 pairwise:1 indeed:1 nor:1 brain:5 relying:1 insist:1 automatically:1 cpu:7 quad:1 considering:1 becomes:3 project:1 estimating:5 moreover:1 linearity:1 panel:13 matched:1 substantially:1 emerging:1 developed:3 compressive:1 finding:1 ag:1 dubbed:1 nj:5 ghosh:1 quantitative:3 every:1 unexplored:1 biometrika:1 k2:1 rm:4 stick:7 partitioning:9 exchangeable:1 positive:1 local:1 xv:3 jiang:1 path:3 becoming:1 approximately:3 fluctuation:1 might:2 chose:1 mssm:1 black:3 k:2 suggests:1 challenging:1 heteroscedastic:1 co:1 limited:2 range:2 karypis:1 decided:1 unique:1 testing:1 union:7 xr:2 procedure:1 displacement:1 drori:1 area:1 empirical:3 elicit:1 significantly:2 composite:1 inferential:1 deed:3 word:1 integrating:1 projection:1 diebolt:1 cannot:1 onto:1 selection:8 close:1 live:1 applying:1 optimize:1 measurable:1 demonstrated:2 center:1 regardless:4 flexibly:2 convex:2 resolution:1 simplicity:1 splitting:1 assigns:1 rule:1 holmes:1 importantly:1 orthonormal:1 zuo:1 obviate:1 maggioni:2 fx:9 coordinate:5 justification:1 annals:1 target:1 construction:1 tan:1 massive:3 anomaly:1 duke:4 element:4 approximated:1 recognition:1 utilized:1 econometrics:1 coarser:1 bottom:6 observed:2 wang:1 commonplace:1 plo:1 decrease:1 trade:1 movement:3 ran:1 intuition:1 complexity:2 compromise:1 predictive:12 efficiency:1 basis:2 joint:4 k0:6 various:2 leo:1 fast:5 monte:1 approached:1 neuroinformatics:1 quite:2 modular:1 supplementary:3 valued:3 presupposes:1 larger:1 jean:1 otherwise:1 ability:1 statistic:5 obviously:1 advantage:1 indication:1 propose:3 subtracting:1 tran:1 product:1 mb:1 cao:1 loop:1 poorly:1 adapts:1 indi:1 kh:1 scalability:1 parent:1 convergence:2 plethora:1 leave:3 develop:1 stat:1 measured:1 ij:2 school:1 eq:2 dividing:1 come:1 indicate:2 concentrate:3 posit:1 radius:1 attribute:1 stochastic:1 centered:1 material:3 exchange:1 multilevel:1 creativity:9 decompose:1 ultra:1 extension:2 scanner:1 sufficiently:2 around:3 considered:4 algorithmic:2 predict:2 dictionary:9 adopt:2 vary:1 purpose:1 estimation:14 stickbreaking:1 weighted:3 clearly:1 genomic:1 gaussian:5 always:1 rather:5 nott:2 avoid:1 breiman:1 improvement:1 indicates:5 likelihood:1 industrial:2 inference:1 dependent:3 stopping:1 xip:1 sb:1 entire:2 spurious:1 ancestor:1 going:1 interested:4 overall:1 dual:1 flexible:1 resonance:3 art:1 marginal:3 construct:2 having:1 sampling:2 park:1 nearly:1 fmri:1 future:1 np:3 others:1 piecewise:1 report:1 employ:1 modern:1 maintain:1 detection:2 interest:3 fd:1 highly:3 evaluation:1 certainly:1 mixture:11 yielding:1 pc:8 chain:2 implication:1 accurate:1 ambient:9 edge:2 fu:1 culture:1 unless:1 tree:18 divide:1 euclidean:1 logarithm:1 desired:1 prince:1 theoretical:1 stopped:1 formalism:1 soft:1 modeling:3 disadvantage:1 introducing:1 deviation:3 subset:1 vertex:1 predictor:9 uniform:1 characterize:1 reported:1 configurable:1 varies:2 considerably:1 adaptively:1 density:35 siam:3 sensitivity:1 probabilistic:1 xi1:1 off:1 systematic:1 connectome:3 
tanner:1 concrete:1 yao:1 extant:1 jo:1 squared:5 iwen:1 daubechies:1 connectivity:1 choose:4 possibly:1 expert:6 american:2 chung:1 return:1 toy:1 suggesting:1 potential:1 summarized:1 bold:1 north:2 gaussianity:1 matter:2 notable:1 explicitly:1 bg:1 later:1 root:1 performed:1 view:1 red:3 start:1 bayes:3 sort:1 rmse:1 ass:1 accuracy:6 variance:6 efficiently:6 yield:3 correspond:1 bayesian:18 carlo:1 autism:1 finer:1 processor:1 simultaneous:1 inform:1 reach:1 whenever:1 higherdimensional:1 competitor:7 frequency:1 involved:1 thereof:1 riemannian:1 workstation:1 sampled:4 newly:1 proved:1 dataset:2 knowledge:3 color:2 dimensionality:6 ubiquitous:1 cj:11 fractional:1 amplitude:1 cbms:1 higher:1 follow:2 methodology:2 response:4 improved:2 nonparametrics:1 formulation:2 though:2 mar:1 generality:1 stage:2 psychopathology:1 correlation:1 rahman:1 multiscale:20 nonlinear:9 lack:1 assessment:1 nonparametrically:1 pragmatically:1 logistic:1 aj:3 quality:2 gray:2 scientific:2 grows:1 requiring:1 true:3 normalized:4 hence:1 francesca:2 deal:1 conditionally:1 white:1 sin:2 self:1 illustrative:2 percentile:2 criterion:2 m:3 stone:1 chin:1 demonstrate:2 vo:1 motion:3 fj:4 stiefel:1 image:2 variational:2 consideration:2 novel:2 harmonic:2 meaning:1 common:1 multinomial:1 mt:1 functional:3 tokdar:1 overview:1 million:5 extend:1 association:2 resting:2 relating:1 numerically:1 expressing:1 refer:2 isabelle:1 measurement:1 gibbs:4 rd:2 mathematics:2 similarly:2 dj:2 specification:1 integrity:1 posterior:20 scenario:5 certain:1 binary:1 allard:1 life:2 fault:1 accomplished:2 joshua:1 yi:7 prune:1 dashed:1 vogelstein:4 signal:4 multiple:2 ii:7 full:1 july:1 smooth:2 asso:2 faster:6 adapt:1 long:1 coded:1 prediction:8 scalable:2 basic:1 regression:14 desideratum:1 patient:1 metric:3 whitened:2 converging:1 iteration:2 kernel:3 sometimes:1 adopting:1 cell:3 irregular:1 addition:1 want:1 fine:1 uninformative:1 whereas:2 crucial:1 allocated:3 unlike:1 regional:1 subject:7 cart:12 undirected:2 jordan:1 structural:1 near:1 yang:1 iii:2 embeddings:1 enough:1 automated:5 config:1 lasso:13 imperfect:1 unidimensional:1 tradeoff:2 i7:1 kohn:2 f:1 york:1 useful:1 stodden:1 clear:2 amount:1 nonparametric:5 ten:1 demonstrably:1 processed:1 generate:1 http:1 exist:1 andr:1 nsf:1 neuroscience:8 disjoint:1 per:4 estimated:2 pillai:1 zang:1 express:1 four:1 shih:1 threshold:1 susumu:1 neither:1 diffusion:4 imaging:4 milham:1 graph:8 ram:1 merely:2 fraction:1 swissroll:12 run:3 everywhere:1 uncertainty:2 powerful:1 place:2 almost:1 reasonable:1 family:5 reporting:1 electronic:1 guyon:1 parsimonious:1 griffin:1 scaling:4 bound:1 yale:1 fan:1 yielded:2 barnes:1 precisely:1 isbell:1 x2:2 encodes:1 dominated:1 span:1 kumar:1 performing:1 relatively:11 department:3 metis:2 combination:2 ball:1 poor:1 conjugate:1 march:1 across:5 partitioned:1 alleviation:1 projecting:1 invariant:1 pr:1 pipeline:4 computationally:3 behavioural:1 visualization:1 previously:2 remains:3 mori:1 mind:2 letting:1 available:1 gaussians:1 hierarchical:3 magnetic:3 subtracted:1 rp:4 existence:2 top:6 dirichlet:3 include:2 remaining:1 ensure:1 running:1 assumes:1 graphical:1 medicine:1 restrictive:3 conquer:1 classical:1 society:2 tensor:3 quantity:1 parametric:2 strategy:10 nr:1 diagonal:3 ssp:1 september:1 subspace:19 distance:1 unable:1 simulated:1 accommodating:1 manifold:15 fy:22 extent:1 collected:2 assuming:2 length:1 connectomes:3 index:1 illustration:2 ratio:3 sinica:1 dunson:4 neuroimaging:3 disentangling:1 olshen:1 abide:2 fcon:1 steel:1 
unknown:1 nitrc:1 allowing:1 observation:8 neuron:1 datasets:5 markov:1 finite:2 enabling:1 hinton:1 variability:1 head:2 frame:1 david:1 introduced:1 pair:1 required:1 meansquared:1 learned:1 able:1 below:1 dynamical:1 pattern:1 sparsity:1 challenge:3 rf:3 including:8 green:2 video:1 memory:1 royal:1 power:3 demanding:1 difficulty:1 rely:1 hybrid:1 natural:1 predicting:2 zhu:2 normality:1 scheme:1 improve:1 misleading:1 chauveau:1 sethuraman:1 kj:11 prior:7 literature:2 voxels:1 geometric:2 relative:3 ultrahigh:1 embedded:3 fully:4 lecture:1 versus:1 borrows:1 foundation:1 integrate:2 affine:1 vectorized:1 xp:2 principle:1 genetics:2 jung:3 sinai:1 bias:5 allow:1 institute:1 wide:2 fall:1 taking:1 face:1 emerge:1 sparse:2 benefit:1 dimension:10 rich:2 unaware:1 collection:1 adaptive:1 ig:4 voxel:1 functionals:1 approximate:1 compact:1 corpus:2 assumed:1 xi:13 search:1 latent:8 continuous:1 table:2 nature:1 learn:3 neurologic:1 ignoring:1 forest:2 mse:3 investigated:2 complex:1 zou:1 vj:5 sp:2 statistica:1 big:2 arise:2 child:2 x1:3 xu:4 intel:1 psychometric:1 borel:1 brw:1 ny:1 tong:1 sub:1 position:1 neuroimage:1 exponential:1 breaking:6 weighting:1 wavelet:3 minute:1 specific:1 emphasized:1 pac:1 mitigates:1 intractable:4 exists:1 workshop:1 adding:1 effectively:1 magnitude:1 chavez:2 chen:3 durham:2 phenotype:2 simply:1 desire:1 satisfies:2 conditional:27 sized:1 goal:2 donoho:1 towards:1 change:2 hard:1 fw:2 specifically:6 infinite:1 uniformly:1 typical:1 sampler:4 averaging:1 total:2 called:2 petralia:2 burnin:1 exception:1 indicating:1 highdimensional:1 select:1 arises:1 assessed:1 constructive:1 mcmc:1 |
On the Sample Complexity of Subspace Learning
Guillermo D. Canas
Massachusetts Institute of Technology
[email protected]
Alessandro Rudi
Robotics Brain and Cognitive Science
Istituto Italiano di Tecnologia
[email protected]
Lorenzo Rosasco
Università degli Studi di Genova, LCSL,
Massachusetts Institute of Technology & Istituto Italiano di Tecnologia
[email protected]
Abstract
A large number of algorithms in machine learning, from principal component
analysis (PCA), and its non-linear (kernel) extensions, to more recent spectral
embedding and support estimation methods, rely on estimating a linear subspace
from samples. In this paper we introduce a general formulation of this problem
and derive novel learning error estimates. Our results rely on natural assumptions
on the spectral properties of the covariance operator associated to the data distribution, and hold for a wide class of metrics between subspaces. As special cases, we
discuss sharp error estimates for the reconstruction properties of PCA and spectral
support estimation. Key to our analysis is an operator theoretic approach that has
broad applicability to spectral learning methods.
1 Introduction
The subspace learning problem is that of finding the smallest linear space supporting data drawn
from an unknown distribution. It is a classical problem in machine learning and statistics, with
several established algorithms addressing it, most notably PCA and kernel PCA [12, 18]. It is also
at the core of a number of spectral methods for data analysis, including spectral embedding methods, from classical multidimensional scaling (MDS) [7, 26], to more recent manifold embedding
methods [22, 16, 2], and spectral methods for support estimation [9]. Therefore knowledge of the
speed of convergence of the subspace learning problem, with respect to the sample size, and the
algorithms? parameters, is of considerable practical importance.
Given a measure ρ from which independent samples are drawn, we aim to estimate the smallest subspace S_ρ that contains the support of ρ. In some cases, the support may lie on, or close to, a subspace of lower dimension than the embedding space, and it may be of interest to learn such a subspace S_ρ in order to replace the original samples by their local encoding with respect to S_ρ.
While traditional methods, such as PCA and MDS, perform such subspace estimation in the data's original space, other, more recent manifold learning methods, such as Isomap [22], Hessian eigenmaps [10], maximum-variance unfolding [24, 25, 21], locally-linear embedding [16, 17], and Laplacian eigenmaps [2] (but also kernel PCA [18]), begin by embedding the data in a feature space, in
which subspace estimation is carried out. Indeed, as pointed out in [11, 4, 3], the algorithms in this
family have a common structure. They embed the data in a suitable Hilbert space H, and compute
a linear subspace that best approximates the embedded data. The local coordinates in this subspace
then become the new representation space. Similar spectral techniques may also be used to estimate
the support of the data itself, as discussed in [9].
While the subspace estimates are derived from the available samples only, or their embedding, the
learning problem is concerned with the quality of the computed subspace as an estimate of S_ρ (the true span of the support of ρ). In particular, it may be of interest to understand the quality of these
estimates, as a function of the algorithm?s parameters (typically the dimensionality of the estimated
subspace).
We begin by defining the subspace learning problem (Sec. 2), in a sufficiently general way to encompass a number of well-known problems as special cases (Sec. 4). Our main technical contribution is
a general learning rate for the subspace learning problem, which is then particularized to common
instances of this problem (Sec. 3). Our proofs use novel tools from linear operator theory to obtain
learning rates for the subspace learning problem which are significantly sharper than existing ones,
under typical assumptions, but also cover a wider range of performance metrics. A full sketch of the
main proofs is given in Section 7, including a brief description of some of the novel tools developed.
We conclude with experimental evidence, and discussion (Sec. 5 and 6).
2 Problem definition and notation
Given a measure ρ with support M in the unit ball of a separable Hilbert space H, we consider in this work the problem of estimating, from n i.i.d. samples X_n = {x_i}_{1≤i≤n}, the smallest linear subspace S_ρ := cl(span(M)) that contains M.

The quality of an estimate Ŝ of S_ρ, for a given metric (or error criterion) d, is characterized in terms of probabilistic bounds of the form

    P[ d(S_ρ, Ŝ) ≤ ε(ρ, n, δ) ] ≥ 1 − δ,   0 < δ ≤ 1,          (1)

for some function ε of the problem's parameters. We derive in the sequel high probability bounds of the above form.
In the remainder, the metric projection operator onto a subspace S is denoted by P_S, where P_S² = P_S* = P_S (every P_S is idempotent and self-adjoint). We denote by ‖·‖_H the norm induced by the dot product ⟨·,·⟩_H in H, and by ‖A‖_p := (Tr(|A|^p))^{1/p} the p-Schatten, or p-class, norm of a linear bounded operator A [15, p. 84].
2.1 Subspace estimates
Letting C := E_{x∼ρ} x ⊗ x be the (uncentered) covariance operator associated to ρ, it is easy to show that S_ρ = cl(Ran C). Similarly, given the empirical covariance C_n := (1/n) Σ_{i=1}^n x_i ⊗ x_i, we define the empirical subspace estimate

    Ŝ_n := span(X_n) = Ran C_n

(note that the closure is not needed in this case because Ŝ_n is finite-dimensional). We also define the k-truncated (kernel) PCA subspace estimate Ŝ_n^k := Ran C_n^k, where C_n^k is obtained from C_n by keeping only its k top eigenvalues. Note that, since the PCA estimate Ŝ_n^k is spanned by the top k eigenvectors of C_n, then clearly Ŝ_n^k ⊆ Ŝ_n^{k'} for k < k', and therefore {Ŝ_n^k}_{k=1}^n is a nested family of subspaces (all of which are contained in S_ρ).
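In a finite-dimensional setting these estimates reduce to an ordinary truncated eigendecomposition; the following sketch (our own, with NumPy) returns the projection operator onto Ŝ_n^k:

    import numpy as np

    def truncated_pca_projection(X, k):
        """Projection onto S_n^k = Ran C_n^k from an (n, d) sample matrix.

        C_n is the *uncentered* empirical covariance (1/n) sum_i x_i x_i^T,
        as in the text; the projection is U_k U_k^T for the top-k
        eigenvectors U_k of C_n.
        """
        n = X.shape[0]
        Cn = X.T @ X / n
        _, U = np.linalg.eigh(Cn)          # eigenvalues in ascending order
        Uk = U[:, ::-1][:, :k]             # top-k eigenvectors
        return Uk @ Uk.T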
As discussed in Section 4.1, since kernel-PCA reduces to regular PCA in a feature space [18] (and can be computed with knowledge of the kernel alone), the following discussion applies equally to kernel-PCA estimates, with the understanding that, in that case, S_ρ is the span of the support of ρ in the feature space.
2.2 Performance criteria
In order for a bound of the form of Equation (1) to be meaningful, a choice of performance criterion d must be made. We define the distance

    d_{α,p}(U, V) := ‖(P_U − P_V) C^α‖_p          (2)

between subspaces U, V, which is a metric over the space of subspaces contained in S_ρ, for 0 ≤ α ≤ 1/2 and 1 ≤ p ≤ ∞. Note that d_{α,p} depends on ρ through C but, in the interest of clarity, this dependence is omitted in the notation. While of interest in its own right, it is also possible to express important performance criteria as particular cases of d_{α,p}. In particular, the so-called reconstruction error [13]

    d_R(S_ρ, Ŝ) := E_{x∼ρ} ‖P_{S_ρ}(x) − P_Ŝ(x)‖²_H

is d_R(S_ρ, ·) = d_{1/2,2}(S_ρ, ·)².

Note that d_R is a natural criterion because a k-truncated PCA estimate minimizes a suitable error d_R over all subspaces of dimension k. Clearly, d_R(S_ρ, Ŝ) vanishes whenever Ŝ contains S_ρ and, because the family {Ŝ_n^k}_{k=1}^n of PCA estimates is nested, d_R(S_ρ, Ŝ_n^k) is non-increasing with k. As shown in [13], a number of unsupervised learning algorithms, including (kernel) PCA, k-means, k-flats, sparse coding, and non-negative matrix factorization, can be written as a minimization of d_R over an algorithm-specific class of sets (e.g. over the set of linear subspaces of a fixed dimension in the case of PCA).
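For finite-dimensional sanity checks, the metric of Eq. (2) can be evaluated directly from matrix representations of the projections and the covariance; the sketch below is illustrative only (the names are ours), with C^α formed through the eigendecomposition of C:

    import numpy as np

    def schatten_norm(A, p):
        s = np.linalg.svd(A, compute_uv=False)
        return s.max() if np.isinf(p) else (s ** p).sum() ** (1.0 / p)

    def d_alpha_p(PU, PV, C, alpha, p):
        """d_{alpha,p}(U, V) = || (P_U - P_V) C^alpha ||_p  (Eq. 2)."""
        w, Q = np.linalg.eigh(C)
        Ca = (Q * np.clip(w, 0.0, None) ** alpha) @ Q.T   # C^alpha
        return schatten_norm((PU - PV) @ Ca, p)

The reconstruction error is then recovered as d_R(S_ρ, Ŝ) = d_alpha_p(P_{S_ρ}, P_Ŝ, C, 0.5, 2)².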
3 Summary of results
Our main technical contribution is a bound of the form of Eq. (1) for the k-truncated PCA estimate Ŝ_n^k (with the empirical estimate Ŝ_n := Ŝ_n^n being a particular case), whose proof is postponed to Sec. 7. We begin by bounding the distance d_{α,p} between S_ρ and the k-truncated PCA estimate Ŝ_n^k, given a known covariance C.

Theorem 3.1. Let {x_i}_{1≤i≤n} be drawn i.i.d. according to a probability measure ρ supported on the unit ball of a separable Hilbert space H, with covariance C. Assuming n > 3, 0 < δ < 1, 0 ≤ α ≤ 1/2, 1 ≤ p ≤ ∞, the following holds for each k ∈ {1, ..., n}:

    P[ d_{α,p}(S_ρ, Ŝ_n^k) ≤ 3 t^α ‖C^α (C + tI)^{−α}‖_p ] ≥ 1 − δ          (3)

where t = max{λ_k, (9/n) log(n/δ)}, and λ_k is the k-th top eigenvalue of C.
We say that C has eigenvalue decay rate of order r if there are constants q, Q > 0 such that q j^{−r} ≤ λ_j ≤ Q j^{−r}, where λ_j are the (decreasingly ordered) eigenvalues of C, and r > 1. From Equation (2) it is clear that, in order for the subspace learning problem to be well-defined, it must be ‖C^α‖_p < ∞, or alternatively: αp > 1/r. Note that this condition is always met for p = ∞, and also holds in the reconstruction error case (α = 1/2, p = 2) for any decay rate r > 1.

Knowledge of an eigenvalue decay rate can be incorporated into Theorem 3.1 to obtain explicit learning rates, as follows.

Theorem 3.2 (Polynomial eigenvalue decay). Let C have eigenvalue decay rate of order r. Under the assumptions of Theorem 3.1, it is, with probability 1 − δ,

    d_{α,p}(S_ρ, Ŝ_n^k) ≤ Q' k^{−rα+1/p}          if k < k_n^*   (polynomial decay)
    d_{α,p}(S_ρ, Ŝ_n^k) ≤ Q' (k_n^*)^{−rα+1/p}    if k ≥ k_n^*   (plateau)          (4)

where k_n^* = ( qn / (9 log(n/δ)) )^{1/r}, and Q' = 3 ( Q^{1/r} Γ(αp − 1/r) Γ(1 + 1/r) / Γ(1/r) )^{1/p}.
The above theorem guarantees a drop in d_{α,p} with increasing k, at a rate of k^{−rα+1/p}, up to k = k_n^*, after which the bound remains constant. The estimated plateau threshold k_n^* is thus the value of truncation past which the upper bound does not improve. Note that, as described in Section 5, this performance drop and plateau behavior is observed in practice.
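For concreteness, k_n^* can be evaluated numerically once the decay parameters are known or estimated; the values in the example call below are purely illustrative:

    import numpy as np

    def plateau_threshold(n, r, q, delta):
        """k_n^* = (q n / (9 log(n / delta)))^(1/r), from Theorem 3.2."""
        return (q * n / (9.0 * np.log(n / delta))) ** (1.0 / r)

    print(plateau_threshold(n=1000, r=2, q=0.5, delta=0.05))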
The proofs of Theorems 3.1 and 3.2 rely on recent non-commutative Bernstein-type inequalities on
operators [5, 23], and a novel analytical decomposition. Note that classical Bernstein inequalities in
Hilbert spaces (e.g. [14]) could also be used instead of [23]. However, while this approach would
simplify the analysis, it produces looser bounds, as described in Section 7.
If we consider an algorithm that produces, for each set of n samples, an estimate Ŝ_n^k with k ≥ k_n^*, then, by plugging the definition of k_n^* into Eq. (4), we obtain an upper bound on d_{α,p} as a function of n.
Corollary 3.3. Let C have eigenvalue decay rate of order r, and let Q', k_n^* be as in Theorem 3.2. Let Ŝ_n^* be a truncated subspace estimate Ŝ_n^k with k ≥ k_n^*. It is, with probability 1 − δ,

    d_{α,p}(S_ρ, Ŝ_n^*) ≤ Q' ( 9 (log n − log δ) / (qn) )^{α − 1/(rp)}.

Remark 3.4. Note that, by setting k = n, the above corollary also provides guarantees on the rate of convergence of the empirical estimate Ŝ_n = span(X_n) to S_ρ, of order

    d_{α,p}(S_ρ, Ŝ_n) = O( ( (log n − log δ) / n )^{α − 1/(rp)} ).

Corollary 4.1 and Remark 3.4 are valid for all n such that k_n^* ≤ n (or equivalently such that n^{r−1}(log n − log δ) ≥ q/9). Note that, because ρ is supported on the unit ball, its covariance has eigenvalues no greater than one, and therefore it must be q < 1. It thus suffices to require that n > 3 to ensure that the condition k_n^* ≤ n holds.
4 Applications of subspace learning
We describe next some of the main uses of subspace learning in the literature.
4.1 Kernel PCA and embedding methods
One of the main applications of subspace learning is in reducing the dimensionality of the input. In particular, one may find nested subspaces of dimension 1 ≤ k ≤ n that minimize the distances from the original to the projected samples. This procedure is known as the Karhunen-Loève, PCA, or Hotelling transform [12], and has been generalized to Reproducing-Kernel Hilbert Spaces (RKHS) [18].

In particular, the above procedure amounts to computing an eigen-decomposition of the empirical covariance (Sec. 2.1):

    C_n = Σ_{i=1}^n σ_i u_i ⊗ u_i,

where the k-th subspace estimate is Ŝ_n^k := Ran C_n^k = span{u_i : 1 ≤ i ≤ k}. Note that, in the general case of kernel PCA, we assume the samples {x_i}_{1≤i≤n} to be in some RKHS H, and to be obtained from the observed variables (z_1, ..., z_n) ∈ Z^n, for some space Z, through an embedding x_i := Φ(z_i). Typically, due to the very high dimensionality of H, we may only have indirect information about Φ in the form of a kernel function K : Z × Z → R: a symmetric, positive definite function satisfying K(z, w) = ⟨Φ(z), Φ(w)⟩_H [20] (for technical reasons, we also assume K to be continuous). Note that every such K has a unique associated RKHS, and vice versa [20, p. 120-121], whereas, given K, the embedding Φ is only unique up to an inner product-preserving transformation. Given a point z ∈ Z, we can make use of K to compute the coordinates of the projection of its embedding Φ(z) onto Ŝ_n^k ⊂ H by means of a simple k-truncated eigen-decomposition of K_n.
It is easy to see that the k-truncated kernel PCA subspace Ŝ_n^k minimizes the empirical reconstruction error d_R(Ŝ_n, ·) among all subspaces S̃ of dimension k. Indeed, it is

    d_R(Ŝ_n, S̃) = E_{x∼ρ̂} ‖x − P_S̃(x)‖²_H = E_{x∼ρ̂} ⟨(I − P_S̃)x, (I − P_S̃)x⟩_H
                 = E_{x∼ρ̂} ⟨I − P_S̃, x ⊗ x⟩_HS = ⟨I − P_S̃, C_n⟩_HS,          (5)

where ⟨·,·⟩_HS is the Hilbert-Schmidt inner product, from which it is easy to see that the k-dimensional subspace minimizing Equation (5) (alternatively maximizing ⟨P_S̃, C_n⟩) is spanned by the k top eigenvectors of C_n.
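Concretely, if K_n u_i = λ_i u_i (with K_n the Gram matrix of the samples), then v_i = λ_i^{−1/2} Σ_l (u_i)_l Φ(z_l) is a unit vector spanning the i-th direction of Ŝ_n^k (the 1/n normalization of C_n rescales eigenvalues but not eigenvectors), and the i-th projection coordinate of Φ(z) is λ_i^{−1/2} u_i^T t_z. A sketch of this kernel-only computation (illustrative; the small ridge guards against vanishing eigenvalues):

    import numpy as np

    def kpca_coordinates(K_n, t_z, k):
        """Coordinates of the projection of phi(z) onto the k-truncated
        kernel-PCA subspace, computed from the Gram matrix alone.

        K_n : (n, n) with (K_n)_{il} = K(z_i, z_l);
        t_z : (n,)  with (t_z)_i   = K(z_i, z).
        """
        lam, U = np.linalg.eigh(K_n)
        lam, U = lam[::-1][:k], U[:, ::-1][:, :k]      # top-k eigenpairs
        return (U.T @ t_z) / np.sqrt(np.clip(lam, 1e-12, None))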
? error of the
Since we are interested in the expected dR (S? , S?nk ) (rather than the empirical dR (S?n , S))
kernel PCA estimate, we may obtain a learning rate for Equation 5 by particularizing Theorem 3.2
4
to the reconstruction error, for all k (Theorem 3.2), and for k ? k ? with a suitable choice of k ?
(Corollary 4.1). In particular, recalling that dR (S? , ?) = d?,p (S? , ?)2 with ? = 1/2 and p = 2,
and choosing a value of k ? kn? that minimizes the bound of Theorem 3.2, we obtain the following
result.
Corollary 4.1 (Performance of PCA / Reconstruction error). Let C have eigenvalue decay rate of order r, and let Ŝ_n^* be as in Corollary 3.3. Then it holds, with probability 1 − δ,

    d_R(S_ρ, Ŝ_n^*) = O( ( (log n − log δ) / n )^{1 − 1/r} ),

where the dependence on δ is hidden in the Landau symbol.
4.2 Support estimation
The problem of support estimation consists in recovering the support M of a distribution ρ on a metric space Z from identical and independent samples Z_n = (z_i)_{1≤i≤n}. We briefly recall a recently proposed approach to support estimation based on subspace learning [9], and discuss how our results specialize to this setting, producing a qualitative improvement over theirs.

Given a suitable reproducing kernel K on Z (with associated feature map Φ), the support M can be characterized in terms of the subspace S_ρ = cl(span Φ(M)) ⊂ H [9]. More precisely, letting d_V(x) = ‖x − P_V x‖_H be the point-subspace distance to a subspace V, it can be shown (see [9]) that, if the kernel separates¹ M, then it is

    M = {z ∈ Z | d_{S_ρ}(Φ(z)) = 0}.

This suggests an empirical estimate M̂ = {z ∈ Z | d_Ŝ(Φ(z)) ≤ τ} of M, where Ŝ = span Φ(Z_n) and τ > 0. With this choice, almost sure convergence lim_{n→∞} d_H(M, M̂) = 0 in the Hausdorff distance [1] is related to the convergence of Ŝ to S_ρ [9]. More precisely, if the eigenfunctions of the covariance operator C = E_{z∼ρ}[Φ(z) ⊗ Φ(z)] are uniformly bounded, then it suffices for Hausdorff convergence to bound from above d_{(r−1)/(2r),∞} (where r > 1 is the eigenvalue decay rate of C). The following result specializes Corollary 3.3 to this setting.

Corollary 4.2 (Performance of set learning). If 0 ≤ α ≤ 1/2, then it holds, with probability 1 − δ,

    d_{α,∞}(S_ρ, Ŝ_n^*) = O( ( (log n − log δ) / n )^α ),

where the constant in the Landau symbol depends on α.
Figure 1: The figure shows the experimental behavior of the distance d_{α,∞}(Ŝ^k, S_ρ) between the empirical and the actual support subspaces, with respect to the regularization parameter k (shown on a logarithmic axis from 10⁰ to 10³). The setting is the one of Section 5. Here the actual subspace is analytically computed, while the empirical one is computed on a dataset with n = 1000 and 32-bit floating point precision. Note the numerical instability as k tends to 1000.
Letting α = (r−1)/(2r) above yields a high probability bound of order O(n^{−(r−1)/(2r)}) (up to logarithmic factors), which is considerably sharper than the bound O(n^{−(r−1)/(2(3r−1))}) found in [8] (Theorem 7).

¹A kernel is said to separate M if its associated feature map Φ satisfies Φ^{−1}(span Φ(M)) = M (e.g. the Abel kernel is separating).
Note that these are upper bounds for the best possible choice of k (which minimizes the bound). While the optima of both bounds vanish with n → ∞, their behavior is qualitatively different. In particular, the bound of [8] is U-shaped, and diverges for k = n, while ours is L-shaped (no trade-off), and thus also convergent for k = n. Therefore, when compared with [8], our results suggest that no regularization is required from a statistical point of view though, as clarified in the following remark, it may be needed for purposes of numerical stability.
Remark 4.3. While, as proven in Corollary 4.2, regularization is not needed from a statistical perspective, it can play a role in ensuring numerical stability in practice. Indeed, in order to find M̂, we compute d_Ŝ(Φ(z)) for z ∈ Z. Using the reproducing property of K, it can be shown that, for z ∈ Z, it is

    d_{Ŝ^k}(Φ(z)) = ( K(z, z) − ⟨t_z, (K̂_n^k)^† t_z⟩ )^{1/2},

where (t_z)_i = K(z, z_i), K̂_n is the Gram matrix (K̂_n)_{ij} = K(z_i, z_j), K̂_n^k is the rank-k approximation of K̂_n, and (K̂_n^k)^† is the pseudo-inverse of K̂_n^k. The computation of M̂ therefore requires a matrix inversion, which is prone to instability for high condition numbers. Figure 1 shows the behavior of the error that results from replacing Ŝ by its k-truncated approximation Ŝ^k. For large values of k, the small eigenvalues of K̂_n are used in the inversion, leading to numerical instability.
5 Experiments
Figure 2: The spectrum of the empirical covariance (left), and the expected distance from a random sample to the empirical k-truncated kernel-PCA subspace estimate (right), as a function of k (n = 1000, 1000 trials shown in a boxplot). Our predicted plateau threshold k_n^* (Theorem 3.2) is a good estimate of the value k past which the distance stabilizes.

In order to validate our analysis empirically, we consider the following experiment. Let ρ be a uniform one-dimensional distribution on the unit interval. We embed ρ into a reproducing-kernel Hilbert space H using the exponential of the ℓ1 distance (K(u, v) = exp{−‖u − v‖₁}) as kernel. Given n samples drawn from ρ, we compute its empirical covariance in H (whose spectrum is plotted in Figure 2, left), and truncate its eigen-decomposition to obtain a subspace estimate Ŝ_n^k, as described in Section 2.1.
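A sketch reproducing this setup (our own code). We use the fact that the nonzero eigenvalues of C_n coincide with those of K_n/n, so the covariance spectrum can be read off the Gram matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    z = rng.uniform(0.0, 1.0, size=n)                  # uniform on [0, 1]
    K = np.exp(-np.abs(z[:, None] - z[None, :]))       # K(u, v) = exp(-|u - v|)
    spectrum = np.linalg.eigvalsh(K / n)[::-1]         # eigenvalues of C_n
    print(spectrum[:5])                                # polynomial decay, r = 2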
Figure 2 (right) is a box plot of the reconstruction error d_R(S_ρ, Ŝ_n^k) associated with the k-truncated kernel-PCA estimate Ŝ_n^k (the expected distance in H of samples to Ŝ_n^k), with n = 1000 and varying k. While d_R is computed analytically in this example, and S_ρ is fixed, the estimate Ŝ_n^k is a random variable, and hence the variability in the graph. Notice from the figure that, as pointed out in [6] and discussed in Section 6, the reconstruction error d_R(S_ρ, Ŝ_n^k) is always a non-increasing function of k, due to the fact that the kernel-PCA estimates are nested: Ŝ_n^k ⊆ Ŝ_n^{k'} for k < k' (see Section 2.1). The graph is highly concentrated around a curve with a steep initial drop, until reaching some sufficiently high k, past which the reconstruction (pseudo) distance becomes stable, and does not vanish. In our experiments, this behavior is typical for the reconstruction distance and high-dimensional problems.

Due to the simple form of this example, we are able to compute analytically the spectrum of the true covariance C. In this case, the eigenvalues of C decay as 2γ/((kπ)² + γ²), with k ∈ N, and therefore they have a polynomial decay rate r = 2 (see Section 3). Given the known spectrum decay rate, we can estimate the plateau threshold k = k_n^* in the bound of Theorem 3.2, which can be seen to be a good approximation of the observed start of a plateau in d_R(S_ρ, Ŝ_n^k) (Figure 2, right). Notice that our bound for this case (Corollary 4.1) similarly predicts a steep performance drop until the threshold k = k_n^* (indicated in the figure by the vertical blue line), and a plateau afterwards.
6 Discussion
Figure 3 shows a comparison of our learning rates with existing rates in the literature [6, 19]. The plot shows the polynomial decay rate c of the high probability bound d_R(S_ρ, Ŝ_n^k) = O(n^{−c}), as a function of the eigenvalue decay rate r of the covariance C, computed at the best value k_n^* (which minimizes the bound).

Figure 3: Known upper bounds for the polynomial decay rate c (for the best choice of k), for the expected distance from a random sample to the empirical k-truncated kernel-PCA estimate, as a function of the covariance eigenvalue decay rate r (higher is better). Our bound (purple line) consistently outperforms previous ones [19] (black line). The top (dashed) line [6] has significantly stronger assumptions, and is only included for completeness.

The learning rate exponent c, under a polynomial eigenvalue decay assumption of the data covariance C, is c = s(r−1)/(r−s+sr) for [6] and c = (r−1)/(2r−1) for [19], where s is related to the fourth moment. Note that, among the two (purple and black) that operate under the same assumptions, our bound (purple line) is the best by a wide margin. The top, best-performing, dashed line [6] is obtained for the best possible fourth-order moment constraint s = 2r, and is therefore not a fair comparison. However, it is worth noting that our bounds perform almost as well as the most restrictive one, even when we do not include any fourth-order moment constraints.
Choice of truncation parameter k. Since, as pointed out in Section 2.1, the subspace estimates Ŝ_n^k are nested for increasing k (i.e. Ŝ_n^k ⊆ Ŝ_n^{k'} for k < k'), the distance d_{α,p}(S_ρ, Ŝ_n^k), and in particular the reconstruction error d_R(S_ρ, Ŝ_n^k), is a non-increasing function of k. As has been previously discussed [6], this suggests that there is no tradeoff in the choice of k. Indeed, the fact that the estimates Ŝ_n^k become increasingly close to S_ρ as k increases indicates that, when minimizing d_{α,p}(S_ρ, Ŝ_n^k), the best choice is the highest: k = n.

Interestingly, however, both in practice (Section 5) and in theory (Section 3), we observe that a typical behavior for the subspace learning problem in high dimensions (e.g. kernel PCA) is that there is a certain value k = k_n^* past which performance plateaus. For problems such as spectral embedding methods [22, 10, 25], in which a degree of dimensionality reduction is desirable, producing an estimate Ŝ_n^k where k is close to the plateau threshold may be a natural parameter choice: it leads to an estimate of the lowest dimension (k = k_n^*) whose distance to the true S_ρ is almost as low as that of the best-performing one (k = n).
7 Sketch of the proofs
Due to the novelty of the techniques employed, and in order to clarify how they may be used in other contexts, we provide here a proof of our main theoretical result, Theorem 3.1, with some details omitted in the interest of conciseness.
For each λ > 0, we denote by r_λ(x) := 1{x > λ} the step function with a cut-off at λ. Given an empirical covariance operator C_n, we will consider the truncated version r_λ(C_n) where, in this notation, r_λ is applied to the eigenvalues of C_n; that is, r_λ(C_n) has the same eigen-structure as C_n, but its eigenvalues that are less than or equal to λ are clamped to zero.
In order to prove the bound of Equation (3), we begin by proving a more general upper bound on d_{α,p}(S_ρ, Ŝ_n^k), which is split into a random part (A) and a deterministic part (B, C). The bound holds for all values of a free parameter t > 0, which is then constrained and optimized in order to find the (close to) tightest version of the bound.

Lemma 7.1. Let t > 0, 0 ≤ α ≤ 1/2, and λ = λ_k(C) be the k-th top eigenvalue of C; it is

    d_{α,p}(S_ρ, Ŝ_n^k) ≤ ‖(C + tI)^{1/2} (C_n + tI)^{−1/2}‖_∞^{2α} · {3/2 (λ + t)}^α · ‖C^α (C + tI)^{−α}‖_p,          (6)

where we refer to the three factors on the right-hand side as A, B, and C, respectively.
Note that the right-hand side of Equation (6) is the product of three terms, the left of which (A) involves the empirical covariance operator C_n, which is a random variable, while the other two (B, C) are entirely deterministic. The term B has already been reduced to the known quantities t, λ, α; the remaining terms are bounded next. We bound the random term A in the next lemma, whose proof makes use of recent concentration results [23].

Lemma 7.2 (Term A). Let 0 ≤ α ≤ 1/2; for each (9/n) log(n/δ) ≤ t ≤ ‖C‖_∞, with probability 1 − δ it is

    (2/3)^α ≤ ‖(C + tI)^{1/2} (C_n + tI)^{−1/2}‖_∞^{2α} ≤ √2.
Lemma 7.3 (Term C). Let C be a symmetric, bounded, positive semidefinite linear operator on H. If λ_k(C) ≤ f(k) for k ∈ N, where f is a decreasing function, then for all t > 0 and α ≥ 0 it holds

    ‖C^α (C + tI)^{−α}‖_p ≤ inf_{0≤u≤1} g_u^α t^{−uα}          (7)

where g_u^α = ( f(1)^{uαp} + ∫_1^∞ f(x)^{uαp} dx )^{1/p}. Furthermore, if f(k) = g k^{−1/γ}, with 0 < γ < 1 and αp > γ, then it holds

    ‖C^α (C + tI)^{−α}‖_p ≤ Q t^{−γ/p}          (8)

where Q = ( g^γ Γ(αp − γ) Γ(1 + γ) / Γ(γ) )^{1/p}.
The combination of Lemmas 7.1 and 7.2 leads to the main Theorem 3.1, which is a probabilistic bound, holding for every k ∈ {1, ..., n}, with a deterministic term ‖C^α (C + tI)^{−α}‖_p that depends on knowledge of the covariance C. In cases in which some knowledge of the decay rate of C is available, Lemma 7.3 can be applied to obtain Theorem 3.2 and Corollary 3.3. Finally, Corollary 4.1 is simply a particular case for the reconstruction error d_R(S_ρ, ·) = d_{α,p}(S_ρ, ·)², with α = 1/2, p = 2.

As noted in Section 3, looser bounds would be obtained if classical Bernstein inequalities in Hilbert spaces [14] were used instead. In particular, Lemma 7.2 would result in a range for t of q n^{−r/(r+1)} ≤ t ≤ ‖C‖_∞, implying k* = O(n^{1/(r+1)}) rather than O(n^{1/r}), and thus Theorem 3.2 would become (for k ≥ k*) d_{α,p}(S_ρ, Ŝ_n^k) = O(n^{−αr/(r+1)+1/(p(r+1))}) (compared with the sharper O(n^{−α+1/(rp)}) of Theorem 3.2). For instance, for p = 2, α = 1/2, and a decay rate r = 2 (as in the example of Section 5), it would be: d_{1/2,2}(S_ρ, Ŝ_n) = O(n^{−1/4}) using Theorem 3.2, and d_{1/2,2}(S_ρ, Ŝ_n) = O(n^{−1/6}) using classical Bernstein inequalities.
Acknowledgments L. R. acknowledges the financial support of the Italian Ministry of Education,
University and Research FIRB project RBFR12M3AC.
References
[1] G. Beer. Topologies on Closed and Closed Convex Sets. Springer, 1993.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[3] Y. Bengio, O. Delalleau, N. L. Roux, J. F. Paiement, P. Vincent, and M. Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219, 2004.
[4] Y. Bengio, J. F. Paiement, et al. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. Advances in Neural Information Processing Systems, 16:177–184, 2004.
[5] S. Bernstein. The Theory of Probabilities. Gastehizdat Publishing House, Moscow, 1946.
[6] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66(2):259–294, 2007.
[7] I. Borg and P. J. F. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, 2005.
[8] Ernesto De Vito, Lorenzo Rosasco, et al. Learning sets with separating kernels. arXiv:1204.3573, 2012.
[9] Ernesto De Vito, Lorenzo Rosasco, and Alessandro Toigo. Spectral regularization for support estimation. Advances in Neural Information Processing Systems, pages 1–9, 2010.
[10] D. L. Donoho and C. Grimes. Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences, 100(10):5591–5596, 2003.
[11] J. Ham, D. D. Lee, S. Mika, and B. Schölkopf. A kernel view of the dimensionality reduction of manifolds. In Proceedings of the Twenty-First International Conference on Machine Learning, page 47. ACM, 2004.
[12] I. Jolliffe. Principal Component Analysis. Wiley Online Library, 2005.
[13] Andreas Maurer and Massimiliano Pontil. K-dimensional coding schemes in Hilbert spaces. IEEE Transactions on Information Theory, 56(11):5839–5846, 2010.
[14] Iosif Pinelis. Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, pages 1679–1706, 1994.
[15] J. R. Retherford. Hilbert Space: Compact Operators and the Trace Theorem. London Mathematical Society Student Texts. Cambridge University Press, 1993.
[16] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[17] L. K. Saul and S. T. Roweis. Think globally, fit locally: unsupervised learning of low dimensional manifolds. The Journal of Machine Learning Research, 4:119–155, 2003.
[18] B. Schölkopf, A. Smola, and K. R. Müller. Kernel principal component analysis. Artificial Neural Networks - ICANN'97, pages 583–588, 1997.
[19] J. Shawe-Taylor, C. K. Williams, N. Cristianini, and J. Kandola. On the eigenspectrum of the Gram matrix and the generalization error of kernel PCA. IEEE Transactions on Information Theory, 51(7), 2005.
[20] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer-Verlag, New York, 2008.
[21] J. Sun, S. Boyd, L. Xiao, and P. Diaconis. The fastest mixing Markov process on a graph and a connection to a maximum variance unfolding problem. SIAM Review, 48(4):681–699, 2006.
[22] J. B. Tenenbaum, V. De Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[23] J. A. Tropp. User-friendly tools for random matrices: an introduction. 2012.
[24] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. In Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages II-988. IEEE, 2004.
[25] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70(1):77–90, 2006.
[26] C. K. I. Williams. On a connection between kernel PCA and metric multidimensional scaling. Machine Learning, 46(1):11–19, 2002.
Least Informative Dimensions
Fabian H. Sinz
Department for Neuroethology
Eberhard Karls University Tübingen
[email protected]
Anna Stöckl
Department for Functional Zoology
Lund University, Sweden
[email protected]
Jan Grewe
Department for Neuroethology
Eberhard Karls University Tübingen
[email protected]
Jan Benda
Department for Neuroethology
Eberhard Karls University Tübingen
[email protected]
Abstract
We present a novel non-parametric method for finding a subspace of stimulus features that contains all information about the response of a system. Our method
generalizes similar approaches to this problem such as spike triggered average,
spike triggered covariance, or maximally informative dimensions. Instead of maximizing the mutual information between features and responses directly, we use
integral probability metrics in kernel Hilbert spaces to minimize the information
between uninformative features and the combination of informative features and
responses. Since estimators of these metrics access the data via kernels, are easy
to compute, and exhibit good theoretical convergence properties, our method can
easily be generalized to populations of neurons or spike patterns. By using a particular expansion of the mutual information, we can show that the informative
features must contain all information if we can make the uninformative features
independent of the rest.
1 Introduction
An important aspect of deciphering the neural code is to determine those stimulus features that populations of sensory neurons are most sensitive to. Approaches to that problem include white noise analysis [2, 14], in particular spike-triggered average [4] or spike-triggered covariance [3, 19], canonical
correlation analysis or population receptive fields [12], generalized linear models [18, 15], or maximally informative dimensions [22]. All these techniques have in common that they optimize a
statistical dependency measure between stimuli and spike responses over the choice of a linear subspace. The particular algorithms differ in the dimensionality of the subspace they extract (one- vs.
multi-dimensional), the statistical measure they use (correlation, likelihood, relative entropy), and
whether an extension to population responses is feasible or not. While spike-triggered average uses
correlation and is restricted to a single subspace, spike-triggered covariance and canonical correlation analysis can already extract multi-dimensional subspaces but are still restricted to second-order
statistics. Maximally informative dimensions is the only technique of the above that can extract
multiple dimensions that are informative also with respect to higher-order statistics. However, an
extension to spike patterns or population responses is not straightforward because of the curse of dimensionality. Here we approach the problem from a different perspective and propose an algorithm
that can extract a multi-dimensional subspace containing all relevant information about the neural
responses Y in terms of Shannon's mutual information (if such a subspace exists). Our method
does not commit to a particular parametric model, and can easily be extended to spike patterns or
population responses.
In general, the problem of finding the most informative subspace of the stimuli X about the responses Y can be described as finding an orthogonal matrix Q (a basis for ℝ^n) that separates X into informative and non-informative features (U, V)^⊤ = QX. Since Q is orthogonal, the mutual information I[X : Y] between X and Y can be decomposed as [5]
$$I[Y : X] = I[Y : U, V] = \mathbb{E}_{X,Y}\left[\log \frac{p(U,V,Y)}{p(U,V)\,p(Y)}\right] = I[Y : U] + \mathbb{E}_{Y,V}\left[\log \frac{p(Y,V \mid U)}{p(Y \mid U)\,p(V \mid U)}\right] = I[Y : U] + \mathbb{E}_U\big[I[Y \mid U : V \mid U]\big]. \qquad (1)$$
Since the two terms on the right hand side of equation (1) are always positive and sum up to the
mutual information between Y and X, two ways to obtain maximally informative features U about
Y would be to either maximize I [Y : U ] or to minimize EU [I [Y |U : V |U ]] via the choice of Q.
The first possibility is along the lines of maximally informative dimensions [22] and involves direct
estimation of the mutual information. The second possibility which avoids direct estimation has
been proposed by Fukumizu and colleagues [5, 6] (we discuss both in Section 3). Here, we explore
a third possibility, which trades practical advantages against a slightly more restrictive objective. The
idea is to obtain maximally informative features U by making V as independent as possible from
the combination of U and Y . For this reason, we name our approach least informative dimensions
(LID). Formally, least informative dimensions tries to minimize the mutual information between the
pair Y , U and V . Using the chain rule for multi information we can write it as (see supplementary
material)
$$I[Y, U : V] = I[Y : X] + I[U : V] - I[Y : U]. \qquad (2)$$
This means that minimizing I[Y, U : V] is equivalent to maximizing I[Y : U] while simultaneously minimizing I[U : V]. Note that I[Y, U : V] = 0 implies I[U : V] = 0. Therefore, if Q can be chosen such that I[Y, U : V] = 0, equation (2) reduces to I[Y : X] = I[Y : U], pushing all information about Y into U.
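The derivation of (2) is deferred to the supplementary material; for completeness, here is one short route via the chain rule for mutual information (our sketch, in the notation above):
$$I[Y, U : V] = I[U : V] + I[Y : V \mid U], \qquad I[Y : X] = I[Y : U, V] = I[Y : U] + I[Y : V \mid U].$$
Solving the second identity for $I[Y : V \mid U]$ and substituting it into the first gives $I[Y, U : V] = I[U : V] + I[Y : X] - I[Y : U]$, which is equation (2).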
Since each new choice of Q requires the estimation of the mutual information between (potentially high-dimensional) variables, direct optimization is hard or infeasible. For this reason, we resort to
another dependency measure which is easier to estimate but shares its minimum with mutual information, that is, it is zero if and only if the mutual information is zero. The objective is to choose Q
such that (Y , U ) and V are independent in that dependency measure. If we can find such a Q, then
we know that I [Y , U : V ] is zero as well, which means that U are the most informative features in
terms of the Shannon mutual information. This will allow us to obtain maximally informative features without ever having to estimate a mutual information. The easier estimation procedure comes
at the cost of only being able to link the alternative dependency measure to the mutual information
if both of them are zero. If there is no Q that achieves this, we will still get informative features in
the alternative measure, but it is not clear how informative they are in terms of mutual information.
2 Least informative dimensions
This section describes how to efficiently find a Q such that I[Y, U : V] = 0 (if such a Q exists). Unless noted otherwise, (U, V)^⊤ = QX, where U denotes the informative and V the uninformative features. The mutual information is a special case of the relative entropy
$$D_{\mathrm{KL}}[p\,\|\,q] = \mathbb{E}_{X \sim p}\left[\log \frac{p(X)}{q(X)}\right]$$
between two distributions p and q. While being linked to the rich theoretical background of Shannon information theory, the relative entropy is known to be hard to estimate [25]. Alternatives to relative entropy of increasing practical interest are the integral probability metrics (IPM), defined as [25, 17]
$$\gamma_{\mathcal{F}}(X : Z) = \sup_{f \in \mathcal{F}} \left|\mathbb{E}_X[f(X)] - \mathbb{E}_Z[f(Z)]\right|. \qquad (3)$$
Intuitively, the metric in equation (3) searches for a function f , which can detect a difference in
the distributions of two random variables X and Z. If no such witness function can be found, the
2
distributions must be equal. If F is chosen to be a sufficiently rich reproducing kernel Hilbert space
H [21], then the supremum in equation (3) can be computed explicitly and the divergence can be
computed in closed form [7]. This particular type of IPM is called maximum mean discrepancy
(MMD) [9, 7, 10].
A kernel k : 𝒳 × 𝒳 → ℝ is a symmetric function such that the matrix K_ij = k(x_i, x_j) is positive (semi-)definite for every selection of points x_1, ..., x_m ∈ 𝒳 [21]. In that case, the functions k(·, x) are elements of a reproducing kernel Hilbert space (RKHS) of functions H. This space is endowed with a dot product ⟨·, ·⟩_H with the so-called reproducing property ⟨k(·, x), f⟩_H = f(x) for f ∈ H. In particular, ⟨k(·, x), k(·, x')⟩_H = k(x, x'). When setting F in equation (3) to be the unit ball in H, the IPM can be computed in closed form as the norm of the difference between the mean functions in H [7, 10, 8, 26]:
$$\gamma_H(X : Z) = \left\|\mathbb{E}_X[k(\cdot, X)] - \mathbb{E}_Z[k(\cdot, Z)]\right\|_H = \left(\mathbb{E}_{X,X'}\big[k(X, X')\big] - 2\,\mathbb{E}_{X,Z}\big[k(X, Z)\big] + \mathbb{E}_{Z,Z'}\big[k(Z, Z')\big]\right)^{1/2}, \qquad (4)$$
where the first equality is derived in [7], and the second equality uses the bi-linearity of the dot product and the reproducing property of k. Furthermore, (X, X') ∼ P_X ⊗ P_X and (Z, Z') ∼ P_Z ⊗ P_Z are two independent random variables drawn from the marginal distributions of X and Z, respectively.
The function E_X[k(·, X)] is an embedding of the distribution of X into the RKHS H via X ↦ E_X[k(·, X)]. If this map is injective, that is, if it uniquely represents the probability distribution of X, then equation (4) is zero if and only if the probability distributions of X and X' are the same. Kernels with that property are called characteristic, in analogy to the characteristic function ϕ_X(t) = E_X[exp(i t^⊤ X)] [26, 27]. This means that for characteristic kernels MMD is zero exactly if the relative entropy D_KL[p ‖ q] is zero as well. Since the mutual information is the relative entropy between the joint distribution and the product of the marginals, we can use MMD to search for a Q such that γ_H(P_{Y,U,V} : P_{Y,U} ⊗ P_V) is zero¹, which then implies that I[Y, U : V] = 0 as well. The finite sample version of (4) is simply given by replacing the expectations with the empirical mean (and possibly some bias correction) [7, 10, 8]. The estimation of γ_H therefore only involves summation over three kernel matrices and can be done in a few lines of code. Unlike for the relative entropy, the empirical estimation of MMD is therefore much more feasible. Furthermore, the residual error of the empirical estimator can be shown to decrease on the order of 1/√m, where m is the number of data points [25]. Note in particular that this rate does not depend on the dimensionality of the data.
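To make the "few lines of code" concrete, here is a minimal numpy sketch of the plug-in estimator of (4) for a Gaussian RBF kernel (the function names and the choice of kernel width are ours):

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = (A**2).sum(1)[:, None] - 2 * A @ B.T + (B**2).sum(1)[None, :]
    return np.exp(-d2 / sigma**2)

def mmd2(X, Z, sigma=1.0):
    """Plug-in estimate of the squared MMD (4) between samples X and Z."""
    return rbf(X, X, sigma).mean() - 2 * rbf(X, Z, sigma).mean() + rbf(Z, Z, sigma).mean()
```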
Objective function The objective function for our optimization problem now has the following form: we transform input examples x_i into features u_i and v_i via (u_i, v_i) = Qx_i. Then we use a kernel k((u_i, v_i, y_i), (u_j, v_j, y_j)) to compute and minimize MMD with respect to the choice of Q. In order to do that efficiently, a few adaptations are required. First, without loss of generality, we minimize the squared MMD instead of MMD itself,
$$\gamma_H^2(Z_1, Z_2) = \mathbb{E}_{Z_1,Z_1'}\big[k(Z_1, Z_1')\big] - 2\,\mathbb{E}_{Z_1,Z_2}\big[k(Z_1, Z_2)\big] + \mathbb{E}_{Z_2,Z_2'}\big[k(Z_2, Z_2')\big], \qquad (5)$$
where Z_1 = (Y, U, V) ∼ P_{Y,U,V} and Z_2 = (Y, U, V) ∼ P_{Y,U} ⊗ P_V.
Second, in order to get samples from P_{Y,U} ⊗ P_V, we assume that our kernel takes the form k((u_i, v_i, y_i), (u_j, v_j, y_j)) = k_1((u_i, y_i), (u_j, y_j)) · k_2(v_i, v_j). For this special case, one can incorporate the independence assumption between (U, Y) and V directly by using the fact that for independent random variables the expectation of the product is equal to the product of expectations, that is,
$$\mathbb{E}\big[k_1((u_i, y_i), (u_j, y_j)) \cdot k_2(v_i, v_j)\big] = \mathbb{E}\big[k_1((u_i, y_i), (u_j, y_j))\big]\, \mathbb{E}\big[k_2(v_i, v_j)\big].$$
This special case of MMD is equivalent to the Hilbert-Schmidt Independence Criterion (HSIC) [9, 23] and can be computed as
$$\hat\gamma_{hs}^2 = \frac{1}{(m-1)^2}\, \mathrm{tr}(K_1 H K_2 H), \qquad (6)$$
where K_1 and K_2 denote the matrices of pairwise kernel values between the data sets {(u_i, y_i)}_{i=1}^m and {v_i}_{i=1}^m, respectively, and H_ij = δ_ij − m^{−1}.
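In code, equation (6) reduces to building a centering matrix and taking a trace; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def hsic(K1, K2):
    """Empirical HSIC (6) from kernel matrices K1 on (u_i, y_i) and K2 on v_i."""
    m = K1.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m   # centering matrix, H_ij = delta_ij - 1/m
    return np.trace(K1 @ H @ K2 @ H) / (m - 1) ** 2
```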
¹ With some abuse of notation, we wrote MMD as a function of the probability measures.
Note, however, that one could in principle also optimize (5) for a non-factorizing kernel by simply shuffling the (u_i, y_i) and v_i across examples. We can also use shuffling to assess whether the optimal value γ̂²_hs found during the optimization is significantly different from zero, by comparing the value to a null distribution over γ̂²_hs obtained from datasets where the (u_i, y_i) and v_i have been permuted across examples.
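The shuffling procedure only requires permuting the rows and columns of K_2; a self-contained sketch (names and the number of permutations are ours):

```python
import numpy as np

def hsic_null(K1, K2, n_perm=1000, seed=0):
    """Null distribution of HSIC obtained by permuting the v samples."""
    rng = np.random.default_rng(seed)
    m = K1.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    null = np.empty(n_perm)
    for t in range(n_perm):
        p = rng.permutation(m)
        K2p = K2[np.ix_(p, p)]            # shuffle the v_i against the (u_i, y_i) pairs
        null[t] = np.trace(K1 @ H @ K2p @ H) / (m - 1) ** 2
    return null
```

The observed HSIC value is then compared against this empirical null distribution.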
Minimization procedure and gradients For optimizing (6) with respect to Q we use gradient descent over the orthogonal group SO(n). The optimization can be carried out by computing the unconstrained gradient ∇_Q γ of the objective function with respect to Q (treating Q as an ordinary matrix), projecting that gradient onto the tangent space of SO(n), and performing a line search along the gradient direction. We now present the necessary formulae to implement the optimization in a modular fashion. We first show how to compute the gradient ∇_Q γ in terms of the gradients ∇_{u_i,v_i} γ̂²_hs, then we show how to compute the ∇_{u_i,v_i} γ̂²_hs in terms of derivatives of kernel functions, and finally demonstrate how the formulae change when approximating the kernel matrices with an incomplete Cholesky decomposition.

Given the unconstrained gradient ∇_Q γ, the projection onto the tangent space is given by Ω = Q ∇_Q γ^⊤ Q − ∇_Q γ [13, eq. (22)]. The function is then minimized by performing a line search along π(Q + tΩ), where π is the projection onto SO(n), which can easily be computed via a singular value decomposition of Q + tΩ by setting the singular values to one [13, prop. 7].
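The projection and retraction just described are short to implement; the following sketch performs one step (our function name; the step size t would be found by a line search over the objective):

```python
import numpy as np

def so_n_step(Q, grad_Q, t):
    """One projected-gradient step on SO(n), following the text above.

    Q      : current orthogonal matrix.
    grad_Q : unconstrained gradient of the objective at Q.
    t      : step size, to be chosen by a line search over the objective.
    """
    Omega = Q @ grad_Q.T @ Q - grad_Q        # projection onto the tangent space
    U, _, Vt = np.linalg.svd(Q + t * Omega)  # retraction: set singular values to one
    return U @ Vt
```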
This means that all we need for the gradient descent on SO(n) is the unconstrained gradient ∇_Q γ. This gradient takes the form of a sum of outer products [16, eq. (20)]
$$\nabla_Q \hat\gamma_{hs}^2 = \sum_{i=1}^m \frac{\partial \hat\gamma_{hs}^2}{\partial (u_i, v_i)}\, x_i^{\top} = J^{\top} \Xi, \qquad J_i = \frac{\partial \hat\gamma_{hs}^2}{\partial (u_i, v_i)},$$
where the matrix Ξ contains the stimuli x_i in its rows.
The first k columns J^{(u)}_α, corresponding to the dimensions of the features u_i, and the last n − k columns J^{(v)}_α, corresponding to the dimensions of the features v_i, are given by
$$J^{(u)}_{\alpha} = \frac{2}{(m-1)^2}\,\mathrm{diag}\!\left(H K_2 H D^{(u)\top}_{\alpha}\right) \quad\text{and}\quad J^{(v)}_{\alpha} = \frac{2}{(m-1)^2}\,\mathrm{diag}\!\left(H K_1 H D^{(v)\top}_{\alpha}\right),$$
where
$$\left(D^{(u)}_{\alpha}\right)_{ij} = \frac{\partial\, k\!\left((u_i, v_i, y_i), (u_j, v_j, y_j)\right)}{\partial u_{i\alpha}}$$
contains the partial derivatives of the kernel with respect to the α-th dimension of u (and analogously for v) in the first argument (see supplementary material for the derivation).
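Assembled in code, this reads as follows (our variable names; the arrays D_u and D_v stack the derivative matrices D_α over the dimensions α):

```python
import numpy as np

def assemble_J(K1, K2, D_u, D_v):
    """Stack the columns J^(u)_alpha and J^(v)_alpha into the (m, n) matrix J.

    K1, K2 : (m, m) kernel matrices on (u_i, y_i) and v_i.
    D_u    : (k, m, m) array; D_u[a, i, j] is the derivative of k1 with
             respect to the a-th dimension of u in the first argument.
    D_v    : (n - k, m, m) array, analogously for v.
    """
    m = K1.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    c = 2.0 / (m - 1) ** 2
    Ju = np.stack([c * np.diag(H @ K2 @ H @ Da.T) for Da in D_u], axis=1)
    Jv = np.stack([c * np.diag(H @ K1 @ H @ Da.T) for Da in D_v], axis=1)
    return np.hstack([Ju, Jv])
```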
Efficient implementation with incomplete Cholesky decomposition of the kernel matrix So far, the evaluation of HSIC requires the computation of two m × m kernel matrices in each step. For larger datasets this can quickly become computationally prohibitive. In order to speed up computation time, we approximate the kernel matrices by an incomplete Cholesky decomposition K = LL^⊤, where L ∈ ℝ^{m×ℓ} is a 'tall' matrix [1]. In that case, HSIC can be computed much faster as the trace of a product of two ℓ × ℓ matrices, because
$$\mathrm{tr}(K_1 H K_2 H) = \mathrm{tr}\!\left(L_1^{\top} H L_2 L_2^{\top} H L_1\right),$$
where H L_k can be efficiently computed by centering L_k on its row mean. Also in this case, the matrix J can be computed efficiently in terms of derivatives of sub-matrices of the kernel matrix (see supplementary material for the exact formulae).
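The low-rank evaluation is a direct transcription of the trace identity above (our function name):

```python
import numpy as np

def hsic_lowrank(L1, L2):
    """HSIC from incomplete Cholesky factors, K_k ~ L_k @ L_k.T with L_k of shape (m, l)."""
    m = L1.shape[0]
    L1c = L1 - L1.mean(axis=0)            # H @ L1, computed by centering
    L2c = L2 - L2.mean(axis=0)            # H @ L2
    M = L1c.T @ L2c                       # small (l1, l2) matrix
    return np.trace(M @ M.T) / (m - 1) ** 2
```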
3 Related work
Kernel dimension reduction in regression [5, 6] Fukumizu and colleagues find maximally informative features U by minimizing E_U[I[V | U : Y | U]] in equation (1) via conditional kernel covariance operators. They show that the covariance operator equals zero if and only if Y is conditionally independent of V given U, that is, Y ⊥⊥ V | U. In that case, U carries all information
about Y . Although their approach is closest to ours, it differs in a few key aspects: In contrast to our
approach, their objective involves the inversion of a (potentially large) kernel matrix, which needs
additional regularization in order to be invertible. A conceptual difference is that we are optimizing
a slightly more restrictive problem because their objective does not attempt to make U independent
of V as well. However, this will not make a difference in many practical cases, since many stimulus
distributions are Gaussian for which the dependencies between U and V can be removed by prewhitening the stimulus data before training LID. In that case I [U : V ] = 0 for every choice of Q
and equation (2) becomes equivalent to maximizing the mutual information between U and Y . The
advantage of our formulation of the problem is that it allows us to detect and quantify independence
by comparing the current γ̂_hs to its null distribution obtained by shuffling the (y_i, u_i) against the v_i across examples. This is hardly possible in the conditional case. Also note that for spherically symmetric data I[U : V] = const. for every choice of Q. In that case equation (2) becomes equivalent to maximizing I[Y : U]. However, a residual redundancy remains which would show up when comparing γ̂²_hs to its null distribution. Finally, the use of kernel covariance operators is bound to kernels that factorize. In principle, our method is also applicable to non-factorizing kernels if we use γ_H instead of γ_hs and obtain the samples from the product distribution P_{Y,U} ⊗ P_V via shuffling.
Maximally informative dimensions [22] Sharpee and colleagues maximize the relative entropy I_spike = D_KL[p(v^⊤s | spike) ‖ p(v^⊤s)] between the distribution of stimuli projected onto informative dimensions given a spike and the marginal distribution of the projection. This relative entropy is the part of the mutual information which is carried by the arrival of a single spike, since
$$I\left[v^{\top}s : \{\text{spike}, \text{no spike}\}\right] = p(\text{spike}) \cdot I_{\text{spike}} + p(\text{no spike}) \cdot I_{\text{no spike}}.$$
Their method is also completely non-parametric and captures higher order dependencies between a stimulus and a single spike. However, by focusing on single spikes and the spike triggered density only, it neglects the dependencies between spikes and the information carried by the silence of the neuron [28]. Moreover, the generalization to spike patterns or population responses is non-trivial, because the information between the projected stimuli and spike patterns ϖ_1, ..., ϖ_ℓ becomes
$$I\left[v^{\top}s : \varpi\right] = \sum_i p(\varpi_i) \cdot I_{\varpi_i}.$$
This requires the estimation of a conditional distribution p(v^⊤s | ϖ_i) for each pattern ϖ_i, which can quickly become prohibitive when the number of patterns grows exponentially.
4 Experiments
In all the experiments below, we demonstrate the validity of our methods on controlled artificial examples and on P-unit recordings from electric fish. We use an RBF kernel on the v_i and a tensor RBF kernel on the (u_i, y_i):
$$k(v_i, v_j) = \exp\left(-\frac{\|v_i - v_j\|^2}{\sigma^2}\right) \quad\text{and}\quad k\big((u_i, y_i), (u_j, y_j)\big) = \exp\left(-\frac{\|u_i y_i^{\top} - u_j y_j^{\top}\|^2}{\sigma^2}\right).$$
The derivatives of the kernels can be found in the supplementary material. Unless noted otherwise, the σ were chosen to be the median of pairwise Euclidean distances between data points. In all artificial experiments, Q was chosen randomly.
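A direct transcription of these kernels, including the median heuristic for σ, could look as follows (function names are ours):

```python
import numpy as np

def sq_dists(A):
    """Pairwise squared Euclidean distances between the rows of A."""
    n2 = (A**2).sum(axis=1)
    return n2[:, None] - 2 * A @ A.T + n2[None, :]

def median_sigma(D2):
    """Median heuristic: median of pairwise Euclidean distances."""
    return np.median(np.sqrt(np.maximum(D2[np.triu_indices_from(D2, k=1)], 0.0)))

def lid_kernels(U, V, Y):
    """RBF kernel on the v_i and tensor RBF kernel on the (u_i, y_i)."""
    D2v = sq_dists(V)
    K2 = np.exp(-D2v / median_sigma(D2v) ** 2)
    T = (U[:, :, None] * Y[:, None, :]).reshape(len(U), -1)  # flattened u_i y_i^T
    D2t = sq_dists(T)
    K1 = np.exp(-D2t / median_sigma(D2t) ** 2)
    return K1, K2
```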
Linear Non-Linear Poisson Model (LNP) In this experiment, we trained LID on a simple linear nonlinear Poisson (LNP) neuron y_i ∼ Poisson(⌊⟨w, x_i⟩ − θ⌋₊) with an exponentially decaying filter w and a rectifying non-linearity (see Figure 1, left). We used m = 5000 data points x_i from a 20-dimensional standard normal distribution N(0, I) as input. The offset θ was chosen such that approximately 35% non-zero spike counts in the y_i were obtained. We used one informative and 19 non-informative dimensions, and set σ = 1 for the tensor kernel.

After optimization, the first dimension q_1 of Q converged to the filter w (Figure 1). We compared the HSIC values γ̂_hs[{(y_i, u_i)}_{i=1,...,m} : {v_i}_{i=1,...,m}] before and after the optimization to their null distribution obtained by shuffling. Before the optimization, the dependence of (Y, U) and V
Figure 1: Left: LNP Model. The informative dimension (gray during optimization, black after optimization) converges to the true filter of an LNP model (blue line). Before optimization (Y , U ) and
V are dependent as shown by the left inset (null distribution obtained via shuffling in gray, dashed
line shows actual HSIC value). After the optimization (right inset) the HSIC value is even below
the null distribution. Right: Two state neuron. LID correctly identifies the subspace (blue dashed)
in which the two true filters (solid black) reside since projections of the filters on the subspace (red
dashed) closely resemble the original filters.
is correctly detected (Figure 1, left, insets). After convergence, the actual HSIC value lies to the left of the null distribution's domain. Since the appropriate test for independence would be one-sided, the null hypothesis "(Y, U) is independent of V" would not be rejected in this case.
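For concreteness, the training data of this experiment can be generated along the following lines (a sketch under the stated settings; the exact filter shape, decay constant, and offset value are our illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 20, 5000
X = rng.standard_normal((m, N))              # stimuli from N(0, I)
w = np.exp(-np.arange(N) / 4.0)              # exponentially decaying filter (assumed decay)
theta = 1.0                                  # offset; tuned so ~35% of counts are non-zero
rate = np.maximum(X @ w - theta, 0.0)        # rectifying non-linearity
y = rng.poisson(rate)                        # LNP spike counts
```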
Two state neuron In this experiment, we simulated a neuron with two states that were both attained in 50% of the trials (see Figure 1, right). This time, the output consisted of four 'bins' whose statistics varied depending on the state. In the first ('steady rate') state, the four bins contained spike counts drawn from an LNP neuron with an exponentially decaying filter as above. In the second ('burst') state, the first two bins were drawn from a Poisson distribution with a fixed base rate independent of the stimulus. The second two bins were drawn from an LNP neuron with a modulated exponential filter and higher gain. We used m = 8000 input stimuli from a 20-dimensional standard normal distribution. We used two informative dimensions and set σ of the tensor kernel to two times the median of the pairwise distances. LID correctly identified the subspace associated with the two filters also in this case (Figure 1, right).
Artificial complex cell In a second experiment, we estimated the two-dimensional subspace associated with an artificial complex cell. We generated a quadrature pair w_1 and w_2 of two 10-dimensional filters (see Figure 2, left). We used m = 8000 input points from a standard normal distribution. Responses were generated from a Poisson distribution with the rate given by λ_i = ⟨w_1, x_i⟩² + ⟨w_2, x_i⟩². This led to about 34% non-zero neural responses. When using two informative subspaces, LID was able to identify the subspace correctly (Figure 2, left). When comparing the HSIC value against the null distribution found via shuffling, the final value indicated no further dependencies. When only a one-dimensional subspace was used (Figure 2, right), LID did not converge to the correct subspace. Importantly, the HSIC value after optimization was clearly outside the support of the null distribution, thereby correctly indicating residual dependencies.
P-Unit recordings from weakly electric fish Finally, we applied our method to P-unit recordings from the weakly electric fish Eigenmannia virescens. These weakly electric fish generate a dipole-like electric field which changes polarity with a frequency of about 300 Hz. Sensors in the skin of the fish are tuned to this carrier frequency and respond to amplitude changes caused by close-by objects with different conductive properties than water [20]. In the present recordings, the immobilized fish was stimulated with 10 s of 300-600 Hz low-pass filtered, full-field, frozen Gaussian white noise amplitude modulations of its own field. Neural activity was recorded intra-cellularly from the P-unit afferents.

Spikes were binned with 1 ms precision. We selected m = 8400 random time points in the spike response and the corresponding preceding 20 ms of the input (20 dimensions). We used the same
Figure 2: Artificial Complex Cell. Left: The original filters are 90° phase-shifted Gabor filters which form an orthogonal basis for a two-dimensional subspace. After optimization, the two informative dimensions of LID (first two rows of Q) converge to that subspace and also form a pair of 90° phase-shifted filters (note that even if the filters are not the same, they span the same subspace). Comparing the HSIC values before and after optimization shows that this subspace contains the relevant information (left and right inset). Right: If only a one-dimensional informative subspace is used, the filter only slightly converges to the subspace. After optimization, a comparison of the HSIC value to the null distribution obtained via shuffling indicates residual dependencies which are not explained by the one-dimensional subspace (left and right inset).
Figure 3: Most informative feature for a weakly electric fish P-Unit: A random filter (blue trace)
exhibits HSIC values that are clearly outside the domain of the null distribution (left inset). Using
the spike triggered average (red trace) moves the HSIC values of the first feature of Q already inside
the null distribution (middle inset). Further optimization with LID refines the feature (black trace)
and brings the HSIC values closer to zero (right inset). After optimization, the informative feature
U is independent of the features V because the first row and column of the covariance matrix of the
transformed Gaussian input show no correlations. The fact that one informative feature is sufficient
to bring the HSIC values inside the null distribution indicates that a single subspace captures all
information conveyed by these sensory neurons.
kernels as in the experiment on the LNP model. We initialized the first row in Q with the normalized spike triggered average (STA; Figure 3, left, red trace). We neither pre-whitened the data for
computing the STA nor for the optimization of LID. Unlike a random feature (Figure 3, left, blue
trace), the spike triggered average already achieves HSIC values within the null distribution (Figure
3, left and middle inset). The most informative feature corresponding to U looks very similar to the
STA but shifts the HSIC value deeper into the domain of the null distribution (Figure 3, right inset).
This indicates that one single subspace in the input is sufficient to carry all information between the
input and the neural response.
5 Discussion
Here we presented a non-parametric method to estimate a subspace of the stimulus space that contains all information about a response variable Y . Even though our method is completely generic
and applicable to arbitrary input-output pairs of data, we focused on the application in the context of sensory neuroscience. The advantage of the generic approach is that Y can in principle be
anything from spike counts, to spike patterns or population responses. Since our method finds the
most informative dimensions by making the complement of those dimensions as independent from
the data as possible, we termed it least informative dimensions (LID). We use the Hilbert-Schmidt
independence criterion to minimize the dependencies between the uninformative features and the
combination of informative features and outputs. This measure is easy to implement, avoids the
need to estimate mutual information, and its estimator has good convergence properties independent
of the dimensionality of the data. Even though our approach only estimates the informative features
and not mutual information itself, it can help to estimate mutual information by reducing the number
of dimensions.
As in the approach by Fukumizu and colleagues, it might be that no Q exists such that I[Y, U : V] = 0. In that situation, the price to pay for an easier measure is that it is hard to make definite statements about the informativeness of the features U in terms of the Shannon information, since γ_H = I[Y, U : V] = 0 is the point that connects γ_H to the mutual information. As demonstrated in the experiments, we can detect this case by comparing the actual value of γ̂_H to an empirical null distribution of γ̂_H values obtained by shuffling the v_i against the (u_i, y_i) pairs. However, if γ_H ≠ 0, theoretical upper bounds on the mutual information are unfortunately not available. In fact, using results from [25] and Pinsker's inequality one can show that γ_H² bounds the mutual information from below. One might now be tempted to think that maximizing γ_H[Y, U] might be a better way to find informative features. While this might be a way to get some informative features [24], it is not possible to link the features to informativeness in terms of Shannon mutual information, because the point that builds the bridge between the two dependency measures is where both of them are zero. Anywhere else the bound may not be tight, so the maximally informative features in terms of γ_H and in terms of mutual information can be different.
Another problem our approach shares with many algorithms that detect higher-order dependencies
is the non-convexity of the objective function. In practice, we found that the degree to which this
poses a problem very much depends on the problem at hand. For instance, while the subspaces of
the LNP or the two state neuron were detected reliably, the two dimensional subspace of the artificial
complex cell seems to pose a harder problem. It is likely that the choice of kernel has an influence
on the landscape of the objective function. We plan to explore this relationship in more detail in the
future. In general, a good initialization of Q helps to get close to the global optimum.
Beyond that, however, integral probability metric approaches to maximally informative dimensions
offer a great chance to avoid many problems associated with direct estimation of mutual information,
and to extend it to much more interesting output structures than single spikes.
Acknowledgements
Fabian Sinz would like to thank Lucas Theis and Sebastian Gerwinn for helpful discussions and comments
on the manuscript. This study is part of the research program of the Bernstein Center for Computational
Neuroscience, Tübingen, funded by the German Federal Ministry of Education and Research (BMBF; FKZ:
01GQ1002).
References
[1] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. In Proceedings of the 22nd International Conference on Machine Learning (ICML '05), pages 33-40, New York, NY, USA, 2005. ACM Press.
[2] E. D. Boer and P. Kuyper. Triggered Correlation, 1968.
[3] N. Brenner, W. Bialek, and R. De Ruyter Van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695-702, 2000.
[4] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Comput. Neural Syst., 12:199-213, 2001.
[5] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5(1):73-99, 2004.
[6] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. Annals of Statistics, 37(4):1871-1905, 2009.
[7] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513-520, Cambridge, MA, 2007. MIT Press.
[8] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773, 2012.
[9] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In S. Jain, H. U. Simon, and E. Tomita, editors, Advances in Neural Information Processing Systems, pages 63-77. Springer, Berlin/Heidelberg, 2005.
[10] A. Gretton, K. Fukumizu, Z. Harchaoui, and B. K. Sriperumbudur. A fast, consistent kernel two-sample test. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems, pages 673-681. Curran, Red Hook, NY, USA, 2009.
[11] J. D. Hunter. Matplotlib: a 2D graphics environment. Computing in Science & Engineering, 9(3):90-95, 2007.
[12] J. Macke, G. Zeck, and M. Bethge. Receptive fields without spike-triggering. Advances in Neural Information Processing Systems 20, pages 1-8, 2007.
[13] J. H. Manton. Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing, 50(3):635-650, 2002.
[14] P. Z. Marmarelis and K. Naka. White-noise analysis of a neuron chain: an application of the Wiener theory. Science, 175(27):1276-1278, 1972.
[15] P. McCullagh and J. A. Nelder. Generalized Linear Models, Second Edition. Chapman and Hall, 1989.
[16] T. P. Minka. Old and new matrix algebra useful for statistics. MIT Media Lab Note, pages 1-19, 2000.
[17] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, 1997.
[18] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15(4):243-262, 2004.
[19] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6(4):414-428, 2006.
[20] H. Scheich, T. H. Bullock, and R. H. Hamstra. Coding properties of two classes of afferent nerve fibers: high-frequency electroreceptors in the electric fish, Eigenmannia. Journal of Neurophysiology, 36(1):39-60, 1973.
[21] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Adaptive Computation and Machine Learning. MIT Press, 2001.
[22] T. Sharpee, N. C. Rust, and W. Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Computation, 16(2):223-250, 2004.
[23] A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory: 18th International Conference, pages 13-31. Springer-Verlag, Berlin/Heidelberg, 2007.
[24] L. Song, A. Smola, A. Gretton, J. Bedo, and K. Borgwardt. Feature selection via dependence maximization. Journal of Machine Learning Research, 13(May):1393-1434, 2012.
[25] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, and G. R. G. Lanckriet. On integral probability metrics, phi-divergences and binary classification. Technical Report 1, arXiv, 2009.
[26] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In Proceedings of the 21st Annual Conference on Learning Theory, pages 111-122. Omnipress, 2008.
[27] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11(1):48, 2010.
[28] R. S. Williamson, M. Sahani, and J. W. Pillow. Equating information-theoretic and likelihood-based methods for neural dimensionality reduction. Technical Report 1, arXiv, 2013.
Blind Calibration in Compressed Sensing using
Message Passing Algorithms
Christophe Schülke
Univ Paris Diderot, Sorbonne Paris Cité,
ESPCI and CNRS UMR 7083
Paris 75005, France
Francesco Caltagirone
Institut de Physique Théorique
CEA Saclay and CNRS URA 2306
91191 Gif-sur-Yvette, France
Florent Krzakala
ENS and CNRS UMR 8550,
ESPCI and CNRS UMR 7083
Paris 75005, France
Lenka Zdeborová
Institut de Physique Théorique
CEA Saclay and CNRS URA 2306
91191 Gif-sur-Yvette, France
Abstract
Compressed sensing (CS) is a concept that allows one to acquire compressible signals
with a small number of measurements. As such it is very attractive for hardware
implementations. Therefore, correct calibration of the hardware is a central issue. In this paper we study the so-called blind calibration, i.e. when the training
signals that are available to perform the calibration are sparse but unknown. We
extend the approximate message passing (AMP) algorithm used in CS to the case
of blind calibration. In the calibration-AMP, both the gains on the sensors and the
elements of the signals are treated as unknowns. Our algorithm is also applicable to settings in which the sensors distort the measurements in other ways than
multiplication by a gain, unlike previously suggested blind calibration algorithms
based on convex relaxations. We study numerically the phase diagram of the blind
calibration problem, and show that even in cases where convex relaxation is possible, our algorithm requires a smaller number of measurements and/or signals in
order to perform well.
1 Introduction
The problem of acquiring an N -dimensional signal x through M linear measurements, y = F x,
arises in many contexts. The Compressed Sensing (CS) approach [1, 2] exploits the fact that, in
many cases of interest, the signal is K-sparse (in an appropriate known basis), meaning that only
K = ρN out of the N components are non-zero. Compressed sensing theory shows that a K-sparse N-dimensional signal can be reconstructed from far less than N linear measurements [1, 2], thus saving acquisition time or cost, or increasing the resolution. In the most common setting, the linear M × N map F is considered to be known.
Nowadays, the concept of compressed sensing is very attractive for hardware implementations.
However, one of the main issues when building hardware revolves around calibration. Usually the
sensors introduce a distortion (or decalibration) to the measurements in the form of some unknown
gains. Calibration is about how to determine the transfer function between the measurements and
the readings from the sensor. In some applications dealing with distributed sensors or radars for
instance, the location or intrinsic parameters of the sensors are not exactly known [3, 4]. Similar
distortion can be found in applications with microphone arrays [5]. The need for calibration has
been emphasized in a number of other works, see e.g. [6, 7, 8]. One common way of dealing with
calibration (apart from ignoring it or considering it as measurement noise) is supervised calibration
1
when some known training signals xl , l = 1, . . . , P and the corresponding observations yl are used
to estimate the distortion parameters. Given a sparse signal recovery problem, if we were not able
to previously estimate the distortion parameters via supervised calibration, we will need to estimate
the unknown signal and the unknown distortion parameters simultaneously - this is known as blind
(unsupervised) calibration. If such blind calibration is computationally possible, then it might be
simpler to do than the supervised calibration in practice. The main contribution of this paper is a
computationally efficient message passing algorithm for blind calibration.
1.1 Setting
We state the problem of blind calibration in the following way. First we introduce an unknown distortion parameter (we will also use equivalently the term decalibration parameter or gain) d_μ for each of the sensors, μ = 1, . . . , M. Note that d_μ can also represent a vector of several parameters. We consider that the signal is linearly projected by a known M × N measurement matrix F and only then distorted according to some known transfer function h. This transfer function can be probabilistic (noisy), non-linear, etc. Each sensor μ then provides the following distorted and noisy reading (measure): y_μ = h(z_μ, d_μ, w_μ), where z_μ = Σ_{i=1}^N F_{μi} x_i. As often in CS, we focus on the case where the measurement matrix F is iid Gaussian with zero mean. For the measurement noise w_μ, one usually considers an iid Gaussian noise with variance Δ, which is added to z_μ.
In order to perform the blind calibration, we need to measure several statistically diverse signals.
Given a set of N-dimensional K-sparse signals x_l with l = 1, ..., P, for each of the signals we
consider M sensor readings
$$ y_{\mu l} = h(z_{\mu l}, d_\mu, w_{\mu l}), \qquad \text{where} \qquad z_{\mu l} = \sum_{i=1}^{N} F_{\mu i}\, x_{il}, \qquad (1) $$
where d_μ are the signal-independent distortion parameters, w_μl is a signal-dependent measurement
noise, and h is an arbitrary known function of these variables with standard regularity requirements.
To illustrate a situation in which one has sample-dependent noise w_μl and sample-independent
distortion d_μ, consider for instance sound sensors placed in space at positions d_μ that are not exactly
known. The positions, however, do not change when different sounds are recorded. The noise w_μl
is then the ambient noise that is different during every recording.
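To make the measurement model concrete, the following sketch generates a synthetic blind-calibration instance. We assume the product transfer function h(z, d, w) = (z + w)/d of eq. (2) below; the sizes, sparsity, gain spread, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P, rho = 200, 120, 4, 0.2     # illustrative sizes: alpha = M/N = 0.6

# K-sparse Gauss-Bernoulli signals x_l, stacked as an N x P matrix
x = rng.normal(size=(N, P)) * (rng.random(size=(N, P)) < rho)

F = rng.normal(size=(M, N))         # iid Gaussian measurement matrix
d = rng.uniform(0.8, 1.2, size=M)   # unknown per-sensor gains d_mu
Delta = 1e-8                        # small measurement-noise variance
w = np.sqrt(Delta) * rng.normal(size=(M, P))

z = F @ x                           # linear projections z_{mu l}
y = (z + w) / d[:, None]            # distorted readings y_{mu l} = h(z, d_mu, w)
```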
The final inference problem is hence as follows: Given the M × P measurements y_μl and a perfect
knowledge of the matrix F, we want to infer both the P different signals {x_1, ..., x_P} and the M
distortion parameters d_μ, μ = 1, ..., M. In this work we place ourselves in the Bayesian setting
where we assume the distribution of the signal elements, P_X, and the distortion coefficients, P_D, to
be known.
1.2 Relation to previous work
As far as we know, the problem of blind calibration was first studied in the context of compressed
sensing in [9] where the distortions were considered as multiplicative, i.e. the transfer function was
$$ h(z_{\mu l}, d_\mu, w_{\mu l}) = \frac{1}{d_\mu} \left( z_{\mu l} + w_{\mu l} \right). \qquad (2) $$
A subsequent work [10] considers a more general case when the distortion parameters are d_μ =
(g_μ, θ_μ), and the transfer function is h(z_μl, d_μ, w_μl) = e^{iθ_μ}(z_μl + w_μl)/g_μ. Both [9] and [10] applied
convex optimization based algorithms to the blind calibration problem and their approach seems
to be limited to the above special cases of transfer functions. Our approach is able to deal with a
general transfer function h, and moreover for the product-transfer-function (2) it outperforms the
algorithm of [9].
The most commonly used algorithm for signal reconstruction in CS is the `1 minimization of [1].
In CS without noise and for measurement matrices with iid Gaussian elements, the `1 minimization
algorithm leads to exact reconstruction as long as the measurement rate α = M/N > α_DT in the
limit of large signal dimension, where ?DT is a well known phase transition of Donoho and Tanner
[11]. The blind calibration algorithm of [9, 10] also directly uses `1 minimization for reconstruction.
In the last couple of years, the theory of CS witnessed a large progress thanks to the development of
message passing algorithms based on the standard loopy Belief Propagation (BP) and their analysis
[12, 13, 14, 15, 16]. In the context of compressed sensing, the canonical loopy BP is difficult to implement because its messages would be probability distributions over a continuous support. At the
same time in problems such as compressed sensing, Gaussian or quadratic approximation of BP still
contains the information necessary for a successful reconstruction of the signal. Such approximations of loopy BP originated in works on CDMA multiuser detection [17, 18]. In compressed sensing
the Gaussian approximation of BP is known as the approximate message passing (AMP) [12, 13],
and it was used to prove that with properly designed measurement matrices F the signal can be
reconstructed as long as the number of measurements is larger than the number of non-zero component in the signal, thus closing the gap between the Donoho-Tanner transition and the information
theoretical lower bound [15, 16]. Even without particular design of the measurement matrices the
AMP algorithm outperforms the `1 -minimization for a large class of signals. Importantly for the
present work, [14] generalized the AMP algorithm to deal with a wider range of input and output
functions. For some of those, generalizations of the `1 -minimization based approach are not convex
anymore, and hence they do not have the advantage of provable computational tractability anymore.
The following two works have considered blind-calibration-related problems with the use of AMP-like algorithms. In [19] the authors use AMP combined with expectation maximization to calibrate
gains that act on the signal components rather than on the measurement components as we consider
here. In [20] the authors study the case when every element of the measurement matrix F has to be
calibrated, in contrast to the row-constant gains considered in this paper. The setting of [20] is much
closer to the dictionary learning problem and is much more demanding, both computationally and
in terms of the number of different signals necessary for successful calibration.
1.3 Contributions
In this work we extend the generalized approximate message passing (GAMP) algorithm of [14]
to the problem of blind calibration with a general transfer function h, eq. (1). We denote it as the
calibration-AMP or Cal-AMP algorithm. Cal-AMP uses P > 1 unknown sparse signals to learn
both the different signals x_l, l = 1, ..., P, and the distortion parameters d_μ, μ = 1, ..., M, of the
sensors. We hence overcome the limitation of the blind calibration algorithms presented in [9, 10]
to the class of settings for which the calibration can be written as a convex optimization problem.
In the second part of this paper we analyze the performance of Cal-AMP for the product transfer
function (2) used in [9] and demonstrate its scalability and better performance with respect to their
`1 -based calibration approach. In the numerical study we observe a sharp phase transition generalizing the phase transition seen for AMP in compressed sensing [21]. Note that for the blind calibration
problem to be solvable, we need the amount of information contained in the sensor readings, P M ,
to be at least as large as the size of the vector of distortion parameters M , plus the number of the
non-zero components of all the signals, KP . Defining ? = K/N and ? = M/N , this leads to
?P ? ?P + ?. If we fix the number of signals P we have a well defined line in the (?, ?)-plane
given by
P
??
? ? ?min ,
(3)
P ?1
below which exact calibration cannot be possible. We will compare the empirically observed phase
transition for blind calibration to this theoretical bound as well as to the phase transition that would
have been observed in the pure CS, i.e. if we knew the distortion parameters.
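The counting bound (3) is trivial to evaluate; the helper below is our own illustration, not code from the authors' package [22].

```python
def alpha_min(rho: float, P: int) -> float:
    """Lower counting bound of eq. (3): exact calibration needs
    alpha = M/N >= rho * P / (P - 1), valid for P > 1 signals."""
    if P <= 1:
        return float("inf")   # with a single signal the bound can never be met
    return rho * P / (P - 1)

# e.g. rho = 0.2 and P = 2 require alpha >= 0.4, twice the sparsity cost
```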
2 The Calibration-AMP algorithm
The Cal-AMP algorithm is based on a Bayesian probabilistic formulation of the reconstruction
problem. Denoting P_X(x_il) the assumed empirical distribution of the components of the signal,
P_W(w_μl) the assumed probability distribution of the components of the noise, and P_D(d_μ) the assumed empirical distribution of the distortion parameters, the Bayes formula yields
$$ P(x, d \mid F, y) = \frac{1}{Z} \prod_{i,l=1}^{N,P} P_X(x_{il}) \prod_{\mu=1}^{M} P_D(d_\mu) \prod_{l,\mu=1}^{P,M} \int \mathrm{d}w_{\mu l}\, P_W(w_{\mu l})\, \delta\!\left[ y_{\mu l} - h(z_{\mu l}, d_\mu, w_{\mu l}) \right], \qquad (4) $$
where Z is a normalization constant and z_μl = Σ_i F_μi x_il. We denote the marginals of the signal
components ν^x_il(x_il) = ∫ Π_μ dd_μ Π_{jn≠il} dx_jn P(x, d | F, y) and those of the distortion parameters ν^d_μ(d_μ) = ∫ Π_{ν≠μ} dd_ν Π_{il} dx_il P(x, d | F, y). The estimator x̂_il that minimizes the expected
mean-squared error (MSE) of the signals and the estimator d̂_μ of the distortion parameters are the averages w.r.t. the marginal distributions, namely x̂_il = ∫ dx_il x_il ν^x_il(x_il) and d̂_μ = ∫ dd_μ d_μ ν^d_μ(d_μ).
An exact computation of these estimates is not tractable in any known way, so we use instead a
belief-propagation based approximation that has proven to be fast and efficient in the CS problem
[12, 13, 14]. We remind that GAMP, which leads to a considerably simpler inference problem, is recovered if we set P_D(d_μ) = δ(d_μ − 1), and that usual AMP is recovered by setting h(z, d, w) = z + w
on top of it.
Figure 1: Graphical model representing the blind calibration problem. Here the dimensionality of
the signal is N = 8, the number of sensors is M = 3, and the number of signals used for calibration
P = 2. The variable nodes x_il and d_μ are depicted as circles, the factor nodes as squares.
Given the factor graph representation of the calibration problem in Fig. 1, the canonical belief propagation equations for the probability measure (4) are written in terms of NPM pairs of messages
m̂_μl→il(x_il) and m_il→μl(x_il), representing probability distributions on the signal component x_il,
and PM pairs of messages n_μ→μl(d_μ) and n̂_μl→μ(d_μ), representing probability distributions on
the distortion parameter d_μ. Following the lines of [12, 13, 14, 15], with the use of the central
limit theorem, a Gaussian approximation, and neglecting terms that go to zero as N → ∞, the BP
equations can be closed using only the means and variances of the messages m_il→μl and n_μ→μl:
$$ a_{il \to \mu l} = \int \mathrm{d}x_{il}\, m_{il \to \mu l}(x_{il})\, x_{il}, \qquad v_{il \to \mu l} = \int \mathrm{d}x_{il}\, m_{il \to \mu l}(x_{il})\, x_{il}^2 - a_{il \to \mu l}^2, \qquad (5) $$
$$ k_{\mu \to \mu l} = \int \mathrm{d}d_\mu\, n_{\mu \to \mu l}(d_\mu)\, d_\mu, \qquad l_{\mu \to \mu l} = \int \mathrm{d}d_\mu\, n_{\mu \to \mu l}(d_\mu)\, d_\mu^2 - k_{\mu \to \mu l}^2. \qquad (6) $$
Moreover, again neglecting only terms that go to zero as N → ∞, we can write closed equations on
quantities that correspond to the variable and factor nodes, instead of messages running between
variable and factor nodes. For this we introduce ω_μl = Σ_i F_μi a_il→μl and V_μl = Σ_i F_μi² v_il→μl.
The derivation of the Cal-AMP algorithm is similar to those in [12, 13, 14, 15]. The resulting
algorithm is in the leading order equivalent to the belief propagation for the factor graph from Fig. 1.
To summarize the resulting algorithm we define
$$ \tilde{G}(y, d, \omega, v) = \int \mathrm{d}z\, \mathrm{d}w\, P_W(w)\, \delta[h(z, d, w) - y]\, e^{-\frac{(z-\omega)^2}{2v}}, \quad \text{and} \qquad (7) $$
$$ G(y_{\mu \bullet}, \omega_{\mu \bullet}, V_{\mu \bullet}, \xi) = \ln \left[ \int \mathrm{d}d\, P_D(d) \prod_{n=1}^{P} \tilde{G}(y_{\mu n}, d, \omega_{\mu n}, V_{\mu n})\, e^{\xi d} \right], \qquad (8) $$
where μ• indicates a dependence on all the variables labeled μn with n = 1, ..., P, and δ(·) is the
Dirac delta function. Similarly as Rangan in [14], we define P output functions as
$$ g^l_{\mathrm{out}}(y_{\mu \bullet}, \omega_{\mu \bullet}, V_{\mu \bullet}) = \frac{\partial}{\partial \omega_{\mu l}}\, G(y_{\mu \bullet}, \omega_{\mu \bullet}, V_{\mu \bullet}, \xi = 0). \qquad (9) $$
Note that each of the output functions depends on all the P different signals. We also define the
following input functions
$$ f^x_a(\Sigma^2, R) = [x]_X, \qquad f^x_c(\Sigma^2, R) = [x^2]_X - [x]_X^2, \qquad (10) $$
where [...]_X indicates expectation w.r.t. the measure
$$ \mathcal{M}_X(x, \Sigma^2, R) = \frac{1}{Z(\Sigma^2, R)}\, P_X(x)\, e^{-\frac{(x-R)^2}{2\Sigma^2}}. \qquad (11) $$
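For a concrete prior, the input functions (10) can be evaluated by numerical integration over the measure (11). The sketch below does this on a grid for the Gauss-Bernoulli prior P_X = (1 − ρ)δ(x) + ρN(0, 1) used in the experiments; the grid range and default ρ are illustrative assumptions.

```python
import numpy as np

def input_functions_x(Sigma2, R, rho=0.2, grid=None):
    """Posterior mean f_a^x and variance f_c^x of the measure (11) for a
    Gauss-Bernoulli prior, for scalar (Sigma2, R). The point mass at x = 0
    is handled separately from the Gaussian slab integrated on a grid."""
    if grid is None:
        grid = np.linspace(-8.0, 8.0, 4001)
    # unnormalized weights of the spike at 0 and of the Gaussian slab
    w_spike = (1.0 - rho) * np.exp(-R**2 / (2.0 * Sigma2))
    slab = rho * np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)
    w_slab = slab * np.exp(-(grid - R)**2 / (2.0 * Sigma2))
    Z = w_spike + np.trapz(w_slab, grid)
    m1 = np.trapz(grid * w_slab, grid) / Z        # [x]_X (the spike adds 0)
    m2 = np.trapz(grid**2 * w_slab, grid) / Z     # [x^2]_X
    return m1, m2 - m1**2                         # f_a^x, f_c^x
```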
Given the above definitions, the iterative calibration-AMP algorithm reads as follows:
$$ V^{t+1}_{\mu l} = \sum_i F_{\mu i}^2\, v^t_{il}, \qquad \omega^{t+1}_{\mu l} = \sum_i F_{\mu i}\, a^t_{il} - V^{t+1}_{\mu l}\, e^t_{\mu l}, \qquad (12) $$
$$ e^{t+1}_{\mu l} = g^l_{\mathrm{out}}(y_{\mu \bullet}, \omega^{t+1}_{\mu \bullet}, V^{t+1}_{\mu \bullet}), \qquad h^{t+1}_{\mu l} = -\frac{\partial}{\partial \omega_{\mu l}}\, g^l_{\mathrm{out}}(y_{\mu \bullet}, \omega^{t+1}_{\mu \bullet}, V^{t+1}_{\mu \bullet}), \qquad (13) $$
$$ (\Sigma^{t+1}_{il})^2 = \left[ \sum_\mu F_{\mu i}^2\, h^{t+1}_{\mu l} \right]^{-1}, \qquad R^{t+1}_{il} = a^t_{il} + \sum_\mu F_{\mu i}\, e^{t+1}_{\mu l}\, (\Sigma^{t+1}_{il})^2, \qquad (14) $$
$$ a^{t+1}_{il} = f^x_a\!\left( (\Sigma^{t+1}_{il})^2, R^{t+1}_{il} \right), \qquad v^{t+1}_{il} = f^x_c\!\left( (\Sigma^{t+1}_{il})^2, R^{t+1}_{il} \right). \qquad (15) $$
We initialize ω^{t=0}_μl = y_μl, and a^{t=0}_il and v^{t=0}_il as the mean and variance of the assumed distribution P_X(·),
and iterate these equations until convergence. At every time-step the quantity a_il is the estimate for
the signal element x_il, and v_il is the approximate error of this estimate. The estimate and its error
for the distortion parameter d_μ can be computed as
for the distortion parameter d? can be computed as
?2
?
t+1
t+1
t+1
t+1
t+1
t+1
G(y??
, ???
, V??
, ?)
and l?t+1 = 2 G(y??
, ???
, V??
, ?)
. (16)
k?t+1 =
??
??
?=0
?=0
By setting P_D(d_μ) = δ(d_μ − d^true_μ), and simplifying eq. (8), readers familiar with the work of
Rangan [14] will recognize the GAMP algorithm in eqs. (12)-(15). Note that for a general transfer
function h the generating function G (8) has to be evaluated numerically. The overall complexity of the Cal-AMP algorithm scales as O(MNP) and hence shares the scalability advantages of
AMP [12].
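In code, one sweep of eqs. (12)-(15) has the following shape. This is only a schematic of the update order, with the problem-specific pieces (the output function of eq. (9) and the input functions of eq. (10)) passed in as callables; it is our sketch, not the authors' MATLAB implementation [22].

```python
import numpy as np

def cal_amp_sweep(F, y, a, v, e, g_out, dg_out, f_a, f_c):
    """One Cal-AMP iteration. Shapes: F is (M, N); y, e are (M, P);
    a, v are (N, P). g_out/dg_out implement eq. (13) row-wise,
    f_a/f_c implement eq. (15) element-wise."""
    F2 = F**2
    V = F2 @ v                              # eq. (12): variances V_{mu l}
    omega = F @ a - V * e                   # eq. (12): means with Onsager term
    e = g_out(y, omega, V)                  # eq. (13)
    h = -dg_out(y, omega, V)                # eq. (13)
    Sigma2 = 1.0 / (F2.T @ h)               # eq. (14)
    R = a + Sigma2 * (F.T @ e)              # eq. (14)
    a, v = f_a(Sigma2, R), f_c(Sigma2, R)   # eq. (15)
    return a, v, e
```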
2.1 Cal-AMP for the product transfer function
In the numerical section of this paper we will focus on a specific case of the transfer function
h(z_μl, d_μ, w_μl), defined in eq. (2). We consider the measurement noise w_μl to be Gaussian of zero
mean and variance Δ. This transfer function was considered in the work of [9] and we will hence
be able to compare the performance of Cal-AMP directly to the convex optimization investigated
in [9]. For the product transfer function eq. (2) most integrals requiring a numerical computation in
the general case are expressed analytically and we can replace equations (13) by:
$$ e^{t+1}_{\mu l} = \frac{k^t_\mu\, y_{\mu l} - \omega^t_{\mu l}}{V^t_{\mu l} + \Delta}, \qquad h^{t+1}_{\mu l} = \frac{1}{V^{t+1}_{\mu l} + \Delta} - \frac{l^t_\mu\, y_{\mu l}^2}{(V^{t+1}_{\mu l} + \Delta)^2}, \qquad (17) $$
$$ (C^{t+1}_\mu)^2 = \left[ \sum_n \frac{y_{\mu n}^2}{V^{t+1}_{\mu n} + \Delta} \right]^{-1}, \qquad T^{t+1}_\mu = (C^{t+1}_\mu)^2 \sum_n \frac{y_{\mu n}\, \omega^{t+1}_{\mu n}}{V^{t+1}_{\mu n} + \Delta}, \qquad (18) $$
$$ k^{t+1}_\mu = f^d_a\!\left( (C^{t+1}_\mu)^2, T^{t+1}_\mu \right), \qquad l^{t+1}_\mu = f^d_c\!\left( (C^{t+1}_\mu)^2, T^{t+1}_\mu \right), \qquad (19) $$
where we have introduced the functions f^d_a and f^d_c similarly to those in eq. (10), except the expectation is made w.r.t. the measure
$$ \mathcal{M}_D(d, C^2, T) = \frac{1}{Z(C^2, T)}\, P_D(d)\, |d|^P\, e^{-\frac{(d-T)^2}{2C^2}}. \qquad (20) $$
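The distortion-side input functions f^d_a and f^d_c are the mean and variance of the measure (20); for a uniform gain prior they are one-dimensional integrals, sketched here on a grid. The prior support [d_lo, d_hi] is an assumption to be matched to the actual P_D.

```python
import numpy as np

def input_functions_d(C2, T, P, d_lo=0.5, d_hi=1.5, n_grid=2001):
    """Mean f_a^d and variance f_c^d of M_D(d, C^2, T) in eq. (20),
    for P_D uniform on [d_lo, d_hi] (the constant prior density cancels
    in the normalization Z)."""
    d = np.linspace(d_lo, d_hi, n_grid)
    # log-weights for numerical stability: |d|^P times a Gaussian factor
    logw = P * np.log(np.abs(d) + 1e-300) - (d - T)**2 / (2.0 * C2)
    w = np.exp(logw - logw.max())
    Z = np.trapz(w, d)
    m1 = np.trapz(d * w, d) / Z
    m2 = np.trapz(d**2 * w, d) / Z
    return m1, m2 - m1**2
```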
3 Experimental results
Our simulations were performed using a MATLAB implementation of the Cal-AMP algorithm presented in the previous section, that is available online [22]. We focused on the noiseless case Δ = 0
for which exact reconstruction is conceivable. We tested the algorithm on randomly generated
Gauss-Bernoulli signals with density of non-zero elements ρ, normally distributed around zero with
unit variance. For the present experiments the algorithm is using this information via a matching
distribution P_X(x_il). The situation when P_X mismatches the true signal distribution was discussed
for AMP for compressed sensing in [21].
The distortion parameters d_μ were generated from a uniform distribution centered at d = 1 with
variance σ² and width 2√3 σ. This ensures that, as σ² → 0, the results of standard compressed
sensing are recovered, while the distortions are growing with σ². For numerical stability purposes,
the parameter σ² used in the update functions of Cal-AMP was taken to be slightly larger than
the variance used to create the actual distortion parameters. For the same reasons, we have also
added a small noise Δ = 10^{-17} and used damping in the iterations in order to avoid oscillatory
behavior. In this noiseless case we iterate the Cal-AMP equations until the quantity
$$ \mathrm{crit} = \frac{1}{MP} \sum_{\mu l} \Big( k_\mu\, y_{\mu l} - \sum_i F_{\mu i}\, a_{il} \Big)^2 $$
becomes smaller than the numerical precision of the implementation, around 10^{-16}, or until that
quantity does not decrease any more over 100 iterations.
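Given the current estimates (shapes as in the sketches above), this stopping criterion is a one-liner:

```python
import numpy as np

def calibration_crit(F, y, a, k):
    """crit = (1/(M*P)) * sum_{mu,l} (k_mu * y_{mu,l} - (F @ a)_{mu,l})**2,
    with k the (M,) vector of current gain estimates."""
    return float(np.mean((k[:, None] * y - F @ a) ** 2))
```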
Success or failure of the reconstruction is usually determined by looking at the mean squared error
(MSE) between the true signal x⁰_l and the reconstructed one a_l. In the noiseless setting the product
transfer function h leads to a scaling invariance and therefore a better measure of success is the
cross-correlation between real and recovered signal (used in [10]) or a corrected version of the
MSE, defined by:
$$ \mathrm{MSE}_{\mathrm{corr}} = \frac{1}{NP} \sum_{il} \left( x^0_{il} - \hat{s}\, a_{il} \right)^2, \qquad \text{where} \qquad \hat{s} = \frac{1}{M} \sum_\mu \frac{d^0_\mu}{k_\mu} \qquad (21) $$
is an estimation of the scaling factor s. Slight deviations between empirical and theoretical means
due to the finite size of M and N lead to important differences between MSE and MSE_corr, only
the latter truly going to zero for finite N and M.
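Eq. (21) translates directly into a few lines (our sketch):

```python
import numpy as np

def mse_corr(x0, a, d0, k):
    """Scaling-corrected MSE of eq. (21). x0, a: (N, P) true and
    reconstructed signals; d0, k: (M,) true and estimated gains."""
    s_hat = np.mean(d0 / k)             # estimate of the global scale s
    return float(np.mean((x0 - s_hat * a) ** 2))
```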
[Figure 2 appears here: three phase-diagram panels for P = 1, P = 2 and P = 10; vertical axis α, horizontal axis ρ, gray scale log10(MSE_corr), with the lines α_min and α_CS overlaid.]
Figure 2: Phase diagrams for different numbers P of calibrating signals: The measurement rate
α = M/N is plotted against the density of the signal ρ = K/N. The plotted value is the decimal
logarithm of MSE_corr (21) achieved for one random instance. Black indicates failure of the reconstruction, while white represents perfect reconstruction (i.e. an MSE of the order of the numerical
precision). In this figure the distortion variance is σ² = 0.01 and N = 1000. While for P = 1
reconstruction is never possible, for P > 1, there is a phase transition very close to the lower bound
defined by α_min in equation (3) or to the phase transition line of the pure compressed sensing problem α_CS. Note, however, that in the large N limit we expect the calibration phase transition to be
strictly larger than both α_min and α_CS. Note also that while this diagram is usually plotted only
for α ≤ 1 for compressed sensing, the part α > 1 displays pertinent information in blind calibration.
Fig. 2 shows the empirical phase diagrams in the ρ-α plane we obtained from the Cal-AMP algorithm for different numbers of signals P. For P = 1 the reconstruction is never exact, and effectively
this case corresponds to reconstruction without any attempt to calibrate. For any P > 1, there is
a sharp phase transition taking place with a jump in MSE_corr of ten orders of magnitude. As P
increases, the phase of exact reconstruction gets bigger and tends to the one observed in Bayesian
compressed sensing [15]. Remarkably, for small values of the density ρ, the position of the Cal-AMP phase transition is very close to the CS one already for P = 2 and Cal-AMP performs almost
as well as in the total absence of distortion.
[Figure 3 appears here: left panel, averaged log10(MSE_corr) versus α for system sizes N = 50, 200, 400, 1000, 5000, 10000; right panel, mean number of iterations versus α.]
Figure 3: Left: Cal-AMP phase transition as the system size N grows. The curves are obtained by
averaging log10(MSE_corr) over 100 samples, reflecting the probability of correct reconstruction in
the region close to the phase transition, where it is not guaranteed. Parameters are: ρ = 0.2, P = 2,
σ² = 0.0251. For higher values of N, the phase transition becomes sharper. Right: Mean number
of iterations necessary for reconstruction, when the true signal is successfully recovered. Far from
the phase transition, increasing N does not visibly increase the number of iterations for these system
sizes, showing that our algorithm works in linear time. The number of needed iterations increases
drastically as one approaches the phase transition.
[Figure 4 appears here: left panel, MSE_corr versus α for several distortion variances σ², with the lines α_CS and α_min marked; right panel, phase diagram in the σ²-P plane, gray scale log10(MSE_corr).]
Figure 4: Left: Position of the phase transition in α for different distortion variances σ². The
left vertical line represents the position of the CS phase transition, the right one is the counting
bound eq. (3). With growing distortion, larger measurement rates become necessary for perfect
calibration and reconstruction. Intermediary values of MSE_corr are obtained in a region where
perfect calibration is not possible, but distortions are small enough for the uncalibrated AMP to
make only small mistakes. The parameters here are P = 2 and ρ = 0.2. Right: Phase diagram
as the variance of the distortions σ² and the number of signals P vary, for ρ = 0.5, α = 0.75 and
N = 1000.
Fig. 3 shows the behavior near the phase transition, giving insights about the influence of the system
size and the number of iterations needed for precise calibration and reconstruction. In Fig. 4, we
show the jump in the MSE on a single instance as the measurement rate α decreases. The right part
is the phase diagram in the σ²-P plane.
In [9, 10], a calibration algorithm using `1 -minimization has been proposed. While in that case no
assumption on the distribution of the signals and of the gains is needed, for most practical cases
it is expected to be less performant than Cal-AMP if these distributions are known or reasonably
approximated. We implemented the algorithm of [9] with MATLAB using the CVX package [23].
Due to longer running times, experiments were made using a smaller system size N = 100. We
also remind at this point that whereas the Cal-AMP algorithm works for a generic transfer function
(1), the `1 -minimization based calibration is restricted to the transfer functions considered by [9, 10].
Fig. 5 shows a comparison of the performances of the two algorithms in the ρ-α phase diagrams. The
Cal-AMP clearly outperforms the `1 -minimization in the sense that the region in which calibration
is possible is much larger.
[Figure 5 appears here: two rows of four phase-diagram panels for P = 2, 3, 5, 10 (top row: Cal-AMP; bottom row: `1 calibration); axes α versus ρ, gray scale averaged log10(MSE_corr), with the lines α_min, α_CS and α_DT overlaid.]
Figure 5: Comparison of the empirical phase diagrams obtained with the Cal-AMP algorithm proposed here (top) and the `1 -minimization calibration algorithm of [9] (bottom) averaged over several
random samples; black indicates failure, white indicates success. The area where reconstruction is
possible is consistently much larger for Cal-AMP than for `1 -minimization-based calibration. The
plotted lines are the phase transitions for CS without unknown distortions with the AMP algorithm
(α_CS, in red, from [21]), and with `1 -minimization (the Donoho-Tanner transition α_DT, in blue,
from [11]). The line α_min is the lower counting bound from eq. (3). The advantage of Cal-AMP
over `1 -minimization calibration is clear. Note that in both cases, the region close to the transition is
blurred due to finite system size, hence a region of grey pixels (again, the effect is more pronounced
for the `1 algorithm).
4 Conclusion
We have presented the Cal-AMP algorithm for blind calibration in compressed sensing, a problem
where the outputs of the measurements are distorted by some unknown gains on the sensors, eq. (1).
The Cal-AMP algorithm makes it possible to jointly infer sparse signals and the distortion parameters of each
sensor even with a very small number of signals, and is computationally as efficient as the GAMP
algorithm for compressed sensing [14]. Another advantage w.r.t. previous works is that the Cal-AMP algorithm works for a generic transfer function between the measurements and the readings from
the sensor, not only those that permit a convex formulation of the inference problem as in [9, 10]. In
the numerical analysis, we focused on the case of the product transfer function (2) studied in [9].
Our results show that, for the chosen parameters, calibration is possible with a very small number
of different sparse signals P (i.e. P = 2 or P = 3), even very close to the absolute minimum
of measurements required by the counting bound (3). Comparison with the `1 -minimizing calibration
algorithm clearly shows lower requirements on the measurement rate α and on the number of signals
P for Cal-AMP. The Cal-AMP algorithm for blind calibration is scalable and simple to implement.
Its efficiency shows that supervised training is unnecessary. We expect Cal-AMP to become useful
in practical compressed sensing implementations.
Asymptotic analysis of AMP can be done using the state evolution approach [12]. In the case
of Cal-AMP, however, analysis of the resulting state evolution equations is more difficult and has
hence been postponed to future work. Future work also includes the study of the robustness to the
mismatch between assumed and true distribution of signal elements and distortion parameters, as
well as the expectation-maximization based learning of the various parameters. Finally, the use
of spatially coupled measurement matrices [15, 16] could further improve the performance of the
algorithm and make the phase transition asymptotically coincide with the information-theoretical
counting bound (3).
References
[1] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inform. Theory, 51:4203, 2005.
[2] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52:1289, 2006.
[3] B. C. Ng and C. M. S. See. Sensor-array calibration using a maximum-likelihood approach. IEEE Transactions on Antennas and Propagation, 44(6):827-835, 1996.
[4] Z. Yang, C. Zhang, and L. Xie. Robustly stable signal recovery in compressed sensing with structured matrix perturbation. IEEE Transactions on Signal Processing, 60(9):4658-4671, 2012.
[5] R. Mignot, L. Daudet, and F. Ollivier. Compressed sensing for acoustic response reconstruction: Interpolation of the early part. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 225-228, 2011.
[6] T. Ragheb, J. N. Laska, H. Nejati, S. Kirolos, R. G. Baraniuk, and Y. Massoud. A prototype hardware for random demodulation based compressive analog-to-digital conversion. In 51st Midwest Symposium on Circuits and Systems (MWSCAS), pages 37-40. IEEE, 2008.
[7] J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk. Beyond Nyquist: Efficient sampling of sparse bandlimited signals. IEEE Trans. Inform. Theory, 56(1):520-544, 2010.
[8] P. J. Pankiewicz, T. Arildsen, and T. Larsen. Model-based calibration of filter imperfections in the random demodulator for compressive sensing. arXiv:1303.6135, 2013.
[9] R. Gribonval, G. Chardon, and L. Daudet. Blind calibration for compressed sensing by convex optimization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2713-2716, 2012.
[10] C. Bilen, G. Puy, R. Gribonval, and L. Daudet. Blind sensor calibration in sparse recovery using convex optimization. In 10th Int. Conf. on Sampling Theory and Applications, 2013.
[11] D. L. Donoho and J. Tanner. Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci., 102(27):9446-9451, 2005.
[12] D. L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci., 106(45):18914-18919, 2009.
[13] D. L. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing: I. Motivation and construction. In IEEE Information Theory Workshop (ITW), pages 1-5, 2010.
[14] S. Rangan. Generalized approximate message passing for estimation with random linear mixing. In Proc. of the IEEE Int. Symp. on Inform. Theory (ISIT), pages 2168-2172, 2011.
[15] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová. Statistical physics-based reconstruction in compressed sensing. Phys. Rev. X, 2:021005, 2012.
[16] D. L. Donoho, A. Javanmard, and A. Montanari. Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing. In Proc. of the IEEE Int. Symposium on Information Theory (ISIT), pages 1231-1235, 2012.
[17] J. Boutros and G. Caire. Iterative multiuser joint decoding: Unified framework and asymptotic analysis. IEEE Trans. Inform. Theory, 48(7):1772-1793, 2002.
[18] Y. Kabashima. A CDMA multiuser detection algorithm on the basis of belief propagation. J. Phys. A: Math. and Gen., 36(43):11111, 2003.
[19] U. S. Kamilov, A. Bourquard, E. Bostan, and M. Unser. Autocalibrated signal reconstruction from linear measurements using adaptive GAMP. Online preprint, 2013.
[20] F. Krzakala, M. Mézard, and L. Zdeborová. Phase diagram and approximate message passing for blind calibration and dictionary learning. ISIT 2013, arXiv:1301.5898, 2013.
[21] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová. Probabilistic reconstruction in compressed sensing: Algorithms, phase diagrams, and threshold achieving matrices. J. Stat. Mech., P08009, 2012.
[22] http://aspics.krzakala.org/.
[23] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx, 2012.
4,362 | 4,948 | Estimating LASSO Risk and Noise Level
Mohsen Bayati
Stanford University
[email protected]
Murat A. Erdogdu
Stanford University
[email protected]
Andrea Montanari
Stanford University
[email protected]
Abstract
We study the fundamental problems of variance and risk estimation in high dimensional statistical modeling. In particular, we consider the problem of learning
a coefficient vector β₀ ∈ R^p from noisy linear observations y = Xβ₀ + w ∈ R^n
(p > n) and the popular estimation procedure of solving the `1 -penalized least
squares objective known as the LASSO or Basis Pursuit DeNoising (BPDN). In
this context, we develop new estimators for the `2 estimation risk ‖β̂ − β₀‖₂ and
the variance of the noise when the distributions of β₀ and w are unknown. These can
be used to select the regularization parameter optimally. Our approach combines
Stein's unbiased risk estimate [Ste81] and the recent results of [BM12a][BM12b]
on the analysis of approximate message passing and the risk of LASSO.
We establish high-dimensional consistency of our estimators for sequences of matrices X of increasing dimensions, with independent Gaussian entries. We establish validity for a broader class of Gaussian designs, conditional on a certain
conjecture from statistical physics.
To the best of our knowledge, this result is the first that provides an asymptotically
consistent risk estimator for the LASSO solely based on data. In addition, we
demonstrate through simulations that our variance estimation outperforms several
existing methods in the literature.
1 Introduction
In the Gaussian random design model for linear regression, we seek to reconstruct an unknown
coefficient vector β₀ ∈ R^p from a vector of noisy linear measurements y ∈ R^n:
$$ y = X\beta_0 + w, \qquad (1.1) $$
where X ∈ R^{n×p} is a measurement (or feature) matrix with iid rows generated through a multivariate normal density. The noise vector w has iid entries with mean 0 and variance σ². While this
problem is well understood in the low dimensional regime p ≪ n, a growing corpus of research
addresses the more challenging high-dimensional scenario in which p > n. The Basis Pursuit Denoising (BPDN) or LASSO [CD95, Tib96] is an extremely popular approach in this regime, that
finds an estimate for β₀ by minimizing the following cost function
$$ \mathcal{C}_{X,y}(\lambda, \beta) \equiv (2n)^{-1} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1, \qquad (1.2) $$
with λ > 0. In particular, β₀ is estimated by β̂(λ; X, y) = argmin_β C_{X,y}(λ, β). This method is
well suited for the ubiquitous case in which β₀ is sparse, i.e. a small number of features effectively
predict the outcome. Since this optimization problem is convex, it can be solved efficiently, and fast
specialized algorithms have been developed for this purpose [BT09].
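With the normalization of eq. (1.2), the LASSO can be solved off the shelf. For instance, scikit-learn's Lasso minimizes (2n)^{-1}‖y − Xβ‖₂² + α‖β‖₁, so its alpha parameter plays exactly the role of λ here; the data below are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 400, 800
X = rng.normal(size=(n, p))
beta0 = rng.choice([0.0, 1.0, -1.0], size=p, p=[0.9, 0.05, 0.05])
y = X @ beta0 + rng.normal(scale=1.0, size=n)

lam = 1.0
# fit_intercept=False matches the model (1.1), which has no intercept
fit = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000).fit(X, y)
beta_hat = fit.coef_
print("support size:", np.count_nonzero(beta_hat))
```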
Research has established a number of important properties of the LASSO estimator under suitable conditions on the design matrix X, and for sufficiently sparse vectors β₀. Under irrepresentability
conditions, the LASSO correctly recovers the support of β₀ [ZY06, MB06, Wai09]. Under weaker
conditions such as restricted isometry or compatibility properties the correct recovery of the support fails;
however, the `2 estimation error ‖β̂ − β₀‖₂ is of the same order as the one achieved by an oracle estimator that knows the support [CRT06, CT07, BRT09, BdG11]. Finally, [DMM09, RFG09, BM12b]
provided asymptotic formulas for the MSE or other operating characteristics of β̂, for Gaussian design
matrices X.
While the aforementioned research provides solid justification for using the LASSO estimator, it is
of limited guidance to the practitioner. For instance, a crucial question is how to set the regularization
parameter λ. This question becomes even more urgent for high-dimensional methods with multiple
regularization terms. The oracle bounds of [CRT06, CT07, BRT09, BdG11] suggest to take λ =
c σ√(log p) with c a dimension-independent constant (say c = 1 or 2). However, in practice a factor
two in λ can make a substantial difference for statistical applications. Related to this issue is the
question of estimating accurately the `2 error ‖β̂ − β₀‖₂². The above oracle bounds have the form
‖β̂ − β₀‖₂² ≤ C k λ², with k = ‖β₀‖₀ the number of nonzero entries in β₀, as long as λ ≥ c σ√(log p).
As a consequence, minimizing the bound does not yield a recipe for setting λ. Finally, estimating
the noise level is necessary for applying these formulae, and this is in itself a challenging question.
The results of [DMM09, BM12b] provide exact asymptotic formulae for the risk, and its dependence
on the regularization parameter λ. This might appear promising for choosing the optimal value of
λ, but has one serious drawback. The formulae of [DMM09, BM12b] depend on the empirical
distribution¹ of the entries of β₀, which is of course unknown, as well as on the noise level². A step
towards the resolution of this problem was taken in [DMM11], which determined the least favorable
noise level and distribution of entries, and hence suggested a prescription for λ, and a predicted risk
in this case. While this settles the question (in an asymptotic sense) from a minimax point of view,
it would be preferable to have a prescription that is adaptive to the distribution of the entries of β₀
and to the noise level.
Our starting point is the asymptotic results of [DMM09, DMM11, BM12a, BM12b]. These provide
a construction of an unbiased pseudo-data β̂ᵘ that is asymptotically Gaussian with mean β₀. The
LASSO estimator β̂ is obtained by applying a denoiser function to β̂ᵘ. We then use Stein's Unbiased
Risk Estimate (SURE) [Ste81] to derive an expression for the `2 risk (mean squared error) of this
operation. What results is an expression for the mean squared error of the LASSO that only depends
on the observed data y and X. Finally, by modifying this formula we obtain an estimator for the
noise level.
We prove that these estimators are asymptotically consistent for sequences of design matrices X
with converging aspect ratio and iid Gaussian entries. We expect that the consistency holds far
beyond this case. In particular, for the case of general Gaussian design matrices, consistency holds
conditionally on a conjectured formula stated in [JM13] on the basis of the "replica method" from
statistical physics.
For the sake of concreteness, let us briefly describe our method in the case of standard Gaussian
design, that is, when the design matrix X has iid Gaussian entries. We construct the unbiased pseudo-data vector by
$$ \hat{\beta}^u = \hat{\beta} + X^T (y - X\hat{\beta}) \,/\, [n - \|\hat{\beta}\|_0]. \qquad (1.3) $$
Our estimator of the mean squared error is derived from applying SURE to the unbiased pseudo-data.
In particular, our estimator is R̂(y, X, λ, τ̂), where
$$ \hat{R}(y, X, \lambda, \tau) \equiv \tau^2 \left( \frac{2\|\hat{\beta}\|_0}{p} - 1 \right) + \frac{\|X^T(y - X\hat{\beta})\|_2^2}{p\, (n - \|\hat{\beta}\|_0)^2}. \qquad (1.4) $$
Here β̂(λ; X, y) is the LASSO estimator and τ̂ = ‖y − Xβ̂‖₂ / [n − ‖β̂‖₀].
Our estimator of the noise level is
$$ \hat{\sigma}^2 / n = \hat{\tau}^2 - \hat{R}(y, X, \lambda, \hat{\tau}) / \delta, $$
where δ = n/p. Although our rigorous results are asymptotic in the problem dimensions, we show
through numerical simulations that they are accurate already on problems with a few thousands of
variables.
¹ The probability distribution that puts a point mass 1/p at each of the p entries of the vector.
² Note that our definition of noise level σ corresponds to σ√n in most of the compressed sensing literature.
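A direct transcription of eqs. (1.3)-(1.4) and the noise-level formula takes a few lines of numpy. This is our sketch, not code from the authors, and it assumes the paper's scaling conventions for X and σ.

```python
import numpy as np

def lasso_risk_and_noise(y, X, beta_hat):
    """Estimate the LASSO MSE via eq. (1.4) and the noise level via
    sigma^2/n = tau^2 - R/delta, following the standard-design formulas."""
    n, p = X.shape
    df = np.count_nonzero(beta_hat)         # ||beta_hat||_0
    r = y - X @ beta_hat
    beta_u = beta_hat + X.T @ r / (n - df)  # unbiased pseudo-data, eq. (1.3)
    tau_hat = np.linalg.norm(r) / (n - df)  # tau-hat as defined in the text
    R_hat = tau_hat**2 * (2.0 * df / p - 1.0) \
        + np.linalg.norm(X.T @ r)**2 / (p * (n - df)**2)   # eq. (1.4)
    sigma2_over_n = tau_hat**2 - R_hat / (n / p)
    return beta_u, R_hat, sigma2_over_n
```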
[Figure 1 appears here: left panel, MSE versus λ with estimated MSE, true MSE, asymptotic MSE and 90% confidence bands; right panel, σ̂²/n versus λ for AMP.LASSO, N.LASSO, PMLE, RCV.LASSO, SCALED.LASSO and the true value.]
Figure 1: Red color represents the estimated values by our estimators and green color represents the
true values to be estimated. Left: MSE versus regularization parameter λ. Here, δ = 0.5, σ²/n =
0.2, X ∈ R^{n×p} with iid N(0, 1) entries where n = 4000. Right: σ̂²/n versus λ. Comparison of
different estimators of σ² under the same model parameters. Scaled Lasso's prescribed choice of
(λ, σ̂²/n) is marked with a bold x.
To the best of our knowledge, this is the first method for estimating the LASSO mean
square error solely based on data. We compare our approach with earlier work on the estimation of
the noise level. The authors of [NSvdG10] target this problem by using a `1 -penalized maximum
log-likelihood estimator (PMLE), and a related method called "Scaled Lasso" [SZ12] (also studied by
[BC13]) considers an iterative algorithm to jointly estimate the noise level and β₀. Moreover, the authors
of [FGH12] developed a refitted cross-validation (RCV) procedure for the same task. Under some
conditions, the aforementioned studies provide consistency results for their noise level estimators.
We compare our estimator with these methods through extensive numerical simulations.
The rest of the paper is organized as follows. In order to motivate our theoretical work, we start with
numerical simulations in Section 2. The necessary background on SURE and asymptotic distributional characterization of the LASSO is presented in Section 3. Finally, our main theoretical results
can be found in Section 4.
2 Simulation Results
In this section, we validate the accuracy of our estimators through numerical simulations. We also
analyze the behavior of our variance estimator as λ varies, along with four other methods. Two of
these methods rely on the minimization problem
$$ (\hat{\beta}, \hat{\sigma}) = \arg\min_{\beta, \sigma}\; \frac{\|y - X\beta\|_2^2}{2n\, h_1(\sigma)} + h_2(\sigma) + \lambda\, \frac{\|\beta\|_1}{h_3(\sigma)}, $$
where for PMLE h₁(σ) = σ², h₂(σ) = log(σ), h₃(σ) = σ and for the Scaled Lasso h₁(σ) = σ,
h₂(σ) = σ/2, and h₃(σ) = 1. The third method is a naive procedure that estimates the variance in
two steps: (i) use the LASSO to determine the relevant variables; (ii) apply ordinary least squares
on the selected variables to estimate the variance. The fourth method is Refitted Cross-Validation
(RCV) by [FGH12], which also has two stages. RCV requires the sure screening property, that is, the
model selected in its first stage includes all the relevant variables. Note that this requirement may
not be satisfied for many values of λ. In our implementation of RCV, we used the LASSO for
variable selection.
In our simulation studies, we used the LASSO solver l1 ls [SJKG07]. We simulated across 50
replications; within each, we generated a new Gaussian design matrix X. We solved the LASSO
over 20 equidistant λ's in the interval [0.1, 2]. For each λ, a new signal β₀ and noise independent
from X were generated.
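The simulated instances are straightforward to regenerate; here is a sketch for the correlated design of Figure 2, at smaller illustrative sizes (the paper uses n = 5000). The noise scale follows the paper's σ²/n normalization, which is an assumption about the intended scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 1000                        # delta = 0.5; illustrative sizes

beta0 = rng.choice([0.0, 1.0, -1.0], size=p, p=[0.9, 0.05, 0.05])

# banded covariance: 1 on the diagonal, 0.4 just above and below it
Sig = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.normal(size=(n, p)) @ np.linalg.cholesky(Sig).T   # rows ~ N_p(0, Sig)

# noise with sigma^2/n = 0.2, i.e. entries of w have variance 0.2 * n
w = rng.normal(scale=np.sqrt(0.2 * n), size=n)
y = X @ beta0 + w
```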
[Figure 2 appears here: left panel, MSE versus λ with estimated MSE, true MSE, asymptotic MSE and 90% confidence bands; right panel, σ̂²/n versus λ for AMP.LASSO, N.LASSO, PMLE, RCV.LASSO, SCALED.LASSO and the true value.]
Figure 2: Red color represents the estimated values by our estimators and green color represents
the true values to be estimated. Left: MSE versus regularization parameter λ. Here, δ = 0.5,
σ²/n = 0.2, rows of X ∈ R^{n×p} are iid from N_p(0, Σ) where n = 5000 and Σ has entries 1
on the main diagonal, 0.4 above and below the main diagonal. Right: Comparison of different
estimators of σ²/n. Parameter values are the same as in Figure 1. Scaled Lasso's prescribed choice
of (λ, σ̂²/n) is marked with a bold x.
The results are demonstrated in Figures 1 and 2. Figure 1 is obtained using n = 4000, δ = 0.5 and
σ²/n = 0.2. The coordinates of the true signal independently take values 0, 1, −1 with probabilities 0.9,
0.05, 0.05, respectively. For each replication, we used a design matrix X with entries X_{i,j} iid N(0, 1).
Figure 2 is obtained with n = 5000 and the same values of δ and σ² as in Figure 1. The coordinates
of the true signal independently take values 0, 1, −1 with probabilities 0.9, 0.05, 0.05, respectively. For
each replication, we used a design matrix X where each row is independently generated through
N_p(0, Σ), where Σ has 1 on the main diagonal and 0.4 above and below the diagonal.
As can be seen from the figures, the asymptotic theory applies quite well to finite dimensional
data. We refer the reader to [BEM13] for a more detailed simulation analysis.
3 Background and Notations
3.1 Preliminaries and Definitions
First, we need to provide a brief introduction to the approximate message passing (AMP) algorithm
suggested by [DMM09] and its connection to the LASSO (see [DMM09, BM12b] for more details).
For an appropriate sequence of non-linear denoisers {η_t}_{t≥0}, the AMP algorithm constructs a sequence of estimates {β^t}_{t≥0}, pseudo-data {y^t}_{t≥0}, and residuals {ε^t}_{t≥0}, where β^t, y^t ∈ R^p and
ε^t ∈ R^n. These sequences are generated according to the iteration
$$ \beta^{t+1} = \eta_t(y^t), \qquad y^t = \beta^t + X^T \epsilon^t / n, \qquad \epsilon^t = y - X\beta^t + \frac{1}{\delta}\, \epsilon^{t-1} \langle \eta_{t-1}'(y^{t-1}) \rangle, \qquad (3.1) $$
where δ ≡ n/p and the algorithm is initialized with β⁰ = 0 ∈ R^p and ε⁰ = 0 ∈ R^n. In addition, each denoiser
η_t(·) is a separable function and its derivative is denoted by η_t'(·). Given a scalar function f and a
vector u ∈ R^m, we let f(u) denote the vector (f(u₁), ..., f(u_m)) ∈ R^m obtained by applying f
component-wise, and ⟨u⟩ ≡ m^{-1} Σ_{i=1}^m u_i is the average of the vector u ∈ R^m.
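A literal transcription of iteration (3.1) takes a dozen lines; here soft thresholding (introduced below in the text) serves as the example denoiser, with the threshold schedule θ_t = ατ̂_t discussed later. The scaling of X is assumed to follow the paper's conventions, so treat this as a sketch rather than a tuned solver.

```python
import numpy as np

def soft(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def amp_lasso(y, X, alpha=2.0, n_iter=100):
    """AMP iteration (3.1) with eta_t(.) = soft(., alpha * tau_t)."""
    n, p = X.shape
    delta = n / p
    beta = np.zeros(p)
    eps = y.copy()          # epsilon^0 for beta^0 = 0 (no Onsager term yet)
    for _ in range(n_iter):
        tau = np.linalg.norm(eps) / np.sqrt(n)   # tau-hat_t of Theorem 4.1
        theta = alpha * tau
        yt = beta + X.T @ eps / n                # pseudo-data y^t
        beta_new = soft(yt, theta)               # beta^{t+1} = eta_t(y^t)
        # Onsager correction: <eta'_t(y^t)> is the active fraction
        onsager = eps * np.mean(np.abs(yt) > theta) / delta
        eps = y - X @ beta_new + onsager         # residual epsilon^{t+1}
        beta = beta_new
    return beta, tau
```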
Next, consider the state evolution for the AMP algorithm. For the random variable β₀ ~ p_{β₀},
a positive constant σ² and a given sequence of non-linear denoisers {η_t}_{t≥0}, define the sequence
{τ_t²}_{t≥0} iteratively by
$$ \tau_{t+1}^2 = F_t(\tau_t^2), \qquad F_t(\tau^2) \equiv \sigma^2 + \frac{1}{\delta}\, \mathbb{E}\{ [\eta_t(\beta_0 + \tau Z) - \beta_0]^2 \}, \qquad (3.2) $$
where τ₀² = σ² + E{β₀²}/δ and Z ~ N(0, 1) is independent of β₀. From Eq. 3.2, it is apparent
that the function F_t depends on the distribution of β₀. It is shown in [BM12a] that the pseudo-data
y^t has the same asymptotic distribution as β₀ + τ_t Z. This result can be roughly interpreted as: the
pseudo-data generated by AMP is the summation of the true signal and a normally distributed noise
with zero mean, whose variance is determined by the state evolution. In other words, each iteration
produces pseudo-data that is distributed normally around the true signal, i.e. y_i^t ≈ β_{0,i} + N(0, τ_t²).
The importance of this result will appear later when we use Stein's method in order to obtain an
estimator for the MSE and the variance of the noise.
We will use state evolution in order to describe the behavior of a specific type of converging sequence,
defined as follows:
Definition 1. The sequence of instances {β₀(n), X(n), σ²(n)}_{n∈N} indexed by n is said to be a
converging sequence if β₀(n) ∈ R^p, X(n) ∈ R^{n×p}, σ²(n) ∈ R and p = p(n) is such that n/p →
δ ∈ (0, ∞), σ²(n)/n → σ₀² for some σ₀ ∈ R, and in addition the following conditions hold:
(a) The empirical distribution of {β_{0,i}(n)}_{i=1}^p converges in distribution to a probability measure
p_{β₀} on R with bounded 2nd moment. Further, as n → ∞, p^{-1} Σ_{i=1}^p β_{0,i}(n)² → E_{p_{β₀}}{β₀²}.
(b) If {e_i}_{1≤i≤p} ⊆ R^p denotes the standard basis, then n^{-1/2} max_{i∈[p]} ‖X(n)e_i‖₂ → 1 and
n^{-1/2} min_{i∈[p]} ‖X(n)e_i‖₂ → 1, as n → ∞, with [p] ≡ {1, ..., p}.
We provide rigorous results for the special class of converging sequences where the entries of X are iid
N(0, 1) (i.e., the standard Gaussian design model). We also provide results (assuming Conjecture 4.4 is
correct) when the rows of X are iid multivariate normal N_p(0, Σ) (i.e., the general Gaussian design model).
In order to discuss the LASSO connection for the AMP algorithm, we need to use a specific class of
denoisers and apply an appropriate calibration to the state evolution. Here, we briefly describe how
this can be done and refer the reader to [BEM13] for a detailed discussion.
Denote by η : R × R₊ → R the soft thresholding denoiser
$$ \eta(x; \theta) = \begin{cases} x - \theta & \text{if } x > \theta \\ 0 & \text{if } -\theta \le x \le \theta \\ x + \theta & \text{if } x < -\theta. \end{cases} $$
Also, denote by η'(·; ·) the derivative of the soft-thresholding function with respect to its first
argument. We will use the AMP algorithm with the soft-thresholding denoiser η_t(·) = η(·; θ_t)
along with a suitable sequence of thresholds {θ_t}_{t≥0} in order to obtain a connection to the LASSO.
Let α > 0 be a constant and at every iteration t, choose the threshold θ_t = ατ_t. It was shown in
[DMM09] and [BM12b] that the state evolution has a unique fixed point τ₊ = lim_{t→∞} τ_t, and there
exists a mapping α ↦ τ₊(α) between those two parameters. Further, it was shown that the function
α ↦ λ(α) with domain (α_min(δ), ∞) for some constant α_min, given by
$$ \lambda(\alpha) \equiv \alpha \tau_* \left[ 1 - \frac{1}{\delta}\, \mathbb{E}\, \eta'(\beta_0 + \tau_* Z;\, \alpha \tau_*) \right], $$
admits a well-defined continuous and non-decreasing inverse α : (0, ∞) → (α_min, ∞). In particular, the functions α ↦ λ(α) and α ↦ τ₊(α) provide a calibration between the AMP algorithm and
the LASSO, where λ is the regularization parameter.
3.2 Distributional Results for the LASSO
We will proceed by stating a distributional result on the LASSO which was established in [BM12b].
Theorem 3.1. Let {β₀(n), X(n), σ²(n)}_{n∈N} be a converging sequence of instances of the standard
Gaussian design model. Denote the LASSO estimator of β₀(n) by β̂(n, λ) and the unbiased pseudo-data generated by LASSO by β̂ᵘ(n, λ) ≡ β̂ + Xᵀ(y − Xβ̂)/[n − ‖β̂‖₀].
Then, as n → ∞, the empirical distribution of {β̂ᵘ_i, β_{0,i}}_{i=1}^p converges weakly to the joint distribution of (β₀ + τ₊Z, β₀), where β₀ ~ p_{β₀}, τ₊ = τ₊(α(λ)), Z ~ N(0, 1), and β₀ and Z are
independent random variables.
The above theorem combined with the stationarity condition of the LASSO implies that the empirical distribution of {β̂_i, β_{0,i}}_{i=1}^p converges weakly to the joint distribution of (η(β₀ + τ₊Z; θ₊), β₀),
where θ₊ = α(λ)τ₊(α(λ)). It is also important to emphasize a relation between the asymptotic
MSE, τ₊², and the model variance. By Theorem 3.1 and the state evolution recursion, almost surely,
$$ \lim_{p \to \infty} \|\hat{\beta} - \beta_0\|_2^2 / p = \mathbb{E}\left[ \left( \eta(\beta_0 + \tau_* Z; \theta_*) - \beta_0 \right)^2 \right] = \delta(\tau_*^2 - \sigma_0^2), \qquad (3.3) $$
which will be helpful to get an estimator for the noise level.
3.3 Stein's Unbiased Risk Estimator
In [Ste81], Stein proposed a method to estimate the risk of an almost arbitrary estimator of the mean
of a multivariate normal vector. A generalized form of his method can be stated as follows.
Proposition 3.2. [Ste81]&[Joh12] Let x, μ ∈ R^n and V ∈ R^{n×n} be such that x ~ N_n(μ, V).
Suppose that μ̂(x) ∈ R^n is an estimator of μ for which μ̂(x) = x + g(x), that g : R^n → R^n is
weakly differentiable, and that ∀i, j ∈ [n], E_μ[|x_i g_i(x)| + |x_j g_j(x)|] < ∞, where μ is the measure
corresponding to the multivariate Gaussian distribution N_n(μ, V). Define the functional
$$ S(x, \hat{\mu}) \equiv \mathrm{Tr}(V) + 2\, \mathrm{Tr}(V\, Dg(x)) + \|g(x)\|_2^2, $$
where Dg is the vector derivative. Then S(x, μ̂) is an unbiased estimator of the risk, i.e.
E_μ ‖μ̂(x) − μ‖₂² = E_μ[S(x, μ̂)].
In the statistics literature, the above estimator is called "Stein's Unbiased Risk Estimator" or
SURE. The following remark will be helpful to build intuition about our approach.
Remark 1. If we consider the risk of the soft thresholding estimator η(x_i; θ) for μ_i when x_i ~
N(μ_i, σ²) for i ∈ [m], the above formula suggests the functional
$$ \frac{S(x, \eta(\,\cdot\,; \theta))}{m} = -\sigma^2 + \frac{2\sigma^2}{m} \sum_{i=1}^m \mathbb{1}_{\{|x_i| \ge \theta\}} + \frac{1}{m} \sum_{i=1}^m \left[ \min\{|x_i|, \theta\} \right]^2 $$
as an estimator of the corresponding MSE.
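Remark 1's functional is easy to evaluate, and a small Monte Carlo check confirms that it tracks the true risk; the test setup below is our own illustration.

```python
import numpy as np

def sure_soft(x, sigma, theta):
    """Per-coordinate SURE estimate of E||eta(x; theta) - mu||^2 / m
    for x_i ~ N(mu_i, sigma^2), as in Remark 1."""
    return (-sigma**2
            + 2.0 * sigma**2 * np.mean(np.abs(x) >= theta)
            + np.mean(np.minimum(np.abs(x), theta) ** 2))

# sanity check: SURE vs. the actual mean squared error
rng = np.random.default_rng(0)
mu = np.concatenate([np.zeros(9000), np.ones(1000)])
x = mu + rng.normal(size=mu.size)
eta = np.sign(x) * np.maximum(np.abs(x) - 1.5, 0.0)
print(sure_soft(x, 1.0, 1.5), np.mean((eta - mu) ** 2))  # close values
```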
4 Main Results
4.1 Standard Gaussian Design Model
We start by defining two estimators that are motivated by Proposition 3.2.
Definition 2. Define
$$ \hat{R}_\eta(x, \tau) \equiv -\tau^2 + 2\tau^2 \langle \eta'(x) \rangle + \langle (\eta(x) - x)^2 \rangle, $$
where x ∈ R^m, τ ∈ R₊, and η : R → R is a suitable non-linear function. Also, for y ∈ R^n and
X ∈ R^{n×p}, denote by R̂(y, X, λ, τ) the estimator of the mean squared error of the LASSO, where
$$ \hat{R}(y, X, \lambda, \tau) \equiv \frac{\tau^2 \left( 2\|\hat{\beta}\|_0 - p \right)}{p} + \frac{\|X^T (y - X\hat{\beta})\|_2^2}{p\, (n - \|\hat{\beta}\|_0)^2}. $$
Remark 2. Note that R̂(y, X, λ, τ) is just a special case of R̂_η(x, τ) when x = β̂ᵘ and η(·) =
η(·; θ) for θ = λ/(1 − ‖β̂‖₀/n).
We are now ready to state the following theorem on the asymptotic MSE of the AMP:
Theorem 4.1. Let {β₀(n), X(n), σ²(n)}_{n∈N} be a converging sequence of instances of the standard
Gaussian design model. Denote the sequence of estimators of β₀(n) by {β^t(n)}_{t≥0}, the pseudo-data by {y^t(n)}_{t≥0}, and the residuals by {ε^t(n)}_{t≥0} produced by the AMP algorithm using the sequence
of Lipschitz continuous functions {η_t}_{t≥0} as in Eq. 3.1.
Then, as n → ∞, the mean squared error of the AMP algorithm at iteration t + 1 has the same limit
as R̂_{η_t}(y^t, τ̂_t) where τ̂_t = ‖ε^t‖₂/√n. More precisely, with probability one,
$$ \lim_{n \to \infty} \|\beta^{t+1} - \beta_0\|_2^2 / p(n) = \lim_{n \to \infty} \hat{R}_{\eta_t}(y^t, \hat{\tau}_t). \qquad (4.1) $$
In other words, R̂_{η_t}(y^t, τ̂_t) is a consistent estimator of the asymptotic mean squared error of the
AMP algorithm at iteration t + 1.
6
The above theorem allows us to accurately predict how far the AMP estimate is from the true signal
at iteration t + 1 and this can be utilized as a stopping rule for the AMP algorithm. Note that it was
shown in [BM12b] that the left hand side of Eq. (4.1) is E[(?t (?0 + ?t Z) ? ?0 )2 ]. Combining this
with the above theorem, we easily obtain,
b? (y t , ?bt ) = E[(?t (?0 + ?t Z) ? ?0 )2 ] .
lim R
t
n??
We state the following version of Theorem 4.1 for the LASSO.
Theorem 4.2. Let {?0 (n), X(n), ? 2 (n)}n?N be a converging sequence of instances of the standard
b ?). Then with probability
Gaussian design model. Denote the LASSO estimator of ?0 (n) by ?(n,
one,
b X, ?, ?b) ,
lim k?b ? ?0 k2 /p(n) = lim R(y,
n??
2
n??
b 2 /[n ? k?k
b 0 ]. In other words, R(y,
b X, ?, ?b) is a consistent estimator of the
where ?b = ky ? X ?k
asymptotic mean squared error of the LASSO.
Note that Theorem 4.2 enables us to assess the quality of the LASSO estimation without knowing
the true signal itself or the noise (or their distribution). The following corollary can be shown using
the above theorem and Eq. 3.3.
Corollary 4.3. In the standard Gaussian design model, the variance of the noise can be accurately
b X, ?, ?b)/? where ? = n/p and other variables are defined as in
estimated by ?
b2 /n ? ?b2 ? R(y,
Theorem 4.2. In other words, we have
lim ?
? 2 /n = ?02 ,
n??
(4.2)
almost surely, providing us a consistent estimator for the variance of the noise in the LASSO.
Remark 3. Theorems 4.1 and 4.2 provide a rigorous method for selecting the regularization parameter optimally. Also, note that obtaining the expression in Theorem 4.2 only requires solving
one solution path to LASSO problem versus k solution paths required by k-fold cross-validation
methods. Additionally, using the exponential convergence of AMP algorithm for the standard gaussian design model, proved by [BM12b], one can use O(log(1/)) iterations of AMP algorithm and
Theorem 4.1 to obtain the solution path with an additional error up to O().
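A sketch of this λ-selection recipe, assuming scikit-learn's coordinate-descent Lasso as the path solver (any solver would do) and reusing lasso_mse_estimate from the sketch above; the alpha = λ/n conversion reflects sklearn's 1/(2n) loss scaling and is our assumption about matching conventions:

import numpy as np
from sklearn.linear_model import Lasso

def select_lambda(y, X, lambdas):
    # pick the lambda on a grid minimizing the estimated MSE of Theorem 4.2
    n, p = X.shape
    best = (np.inf, None, None)
    for lam in lambdas:
        model = Lasso(alpha=lam / n, fit_intercept=False).fit(X, y)
        r_hat, _ = lasso_mse_estimate(y, X, model.coef_)
        if r_hat < best[0]:
            best = (r_hat, lam, model.coef_)
    return best  # (estimated MSE, chosen lambda, beta_hat)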
4.2 General Gaussian Design Model
In Section 4.1, we devised our estimators based on the standard Gaussian design model. Motivated
by Theorem 4.2, we state the following conjecture of [JM13].
Let {Ω(n)}_{n∈N} be a sequence of inverse covariance matrices. Define the general Gaussian design
model by the converging sequence of instances {β₀(n), X(n), σ²(n)}_{n∈N} where, for each n, the rows
of the design matrix X(n) are iid multivariate Gaussian, i.e., N_p(0, Ω(n)⁻¹).
Conjecture 4.4 ([JM13]). Let {β₀(n), X(n), σ²(n)}_{n∈N} be a converging sequence of instances
under the general Gaussian design model with a sequence of proper inverse covariance matrices {Ω(n)}_{n∈N}. Assume that the empirical distribution of {(β₀,ᵢ, Ω_ii)}_{i=1}^p converges weakly to
the distribution of a random vector (β₀, Υ). Denote the LASSO estimator of β₀(n) by β̂(n, λ)
and the LASSO pseudo-data by β̂^u(n, λ) ≡ β̂ + ΩXᵀ(y − Xβ̂)/[n − ‖β̂‖₀]. Then, for some
τ ∈ R, the empirical distribution of {(β₀,ᵢ, β̂^u_i, Ω_ii)} converges weakly to the joint distribution of
(β₀, β₀ + τΥ^{−1/2}Z, Υ), where Z ∼ N₁(0, 1) and (β₀, Υ) are independent random variables. Further, the empirical distribution of (y − Xβ̂)/[n − ‖β̂‖₀] converges weakly to N(0, τ²).
A heuristic justification of this conjecture using the replica method from statistical physics is offered
in [JM13]. Using the above conjecture, we define the following generalized estimator of the linearly
transformed risk under the general Gaussian design model. The construction of the estimator is
essentially the same as before i.e. apply SURE to unbiased pseudo-data.
Definition 3. For an inverse covariance matrix Ω and a suitable matrix V ∈ R^{p×p}, let W = VΩVᵀ
and define an estimator of ‖V(β̂ − β)‖₂²/p as

    Ξ̂_Ω(y, X, λ, τ, V) ≡ (τ²/p) [ Tr(W_{SS}) − Tr(W_{S̄S̄}) − 2 Tr(W_{SS} Ω⁻¹_{SS} Ω_{SS̄} Ω_{S̄S} Ω⁻¹_{SS}) ]
                          + ‖VΩXᵀ(y − Xβ̂)‖₂² / (p(n − ‖β̂‖₀)²),

where y ∈ R^n and X ∈ R^{n×p} denote the linear observations and the design matrix, respectively.
Further, β̂(n, λ) is the LASSO solution for penalty level λ and τ is a real number. S ⊂ [p] is the
support of β̂ and S̄ is [p] ∖ S. Finally, for a p × p matrix M and subsets D, E of [p], the notation
M_{DE} refers to the |D| × |E| sub-matrix of M obtained by intersection of the rows with indices from D
and the columns with indices from E.
Derivation of the above formula is rather complicated and we refer the reader to [BEM13] for a
detailed argument. A notable case, when V = I, corresponds to the mean squared error of the LASSO
for the general Gaussian design, and the estimator R̂(y, X, λ, τ) is just a special case of the estimator
Ξ̂_Ω(y, X, λ, τ, V). That is, when V = Ω = I, we have Ξ̂_I(y, X, λ, τ, I) = R̂(y, X, λ, τ).
Now, we state the following analog of Theorem 4.2.
Theorem 4.5. Let {β₀(n), X(n), σ²(n)}_{n∈N} be a converging sequence of instances of the general
Gaussian design model with the inverse covariance matrices {Ω(n)}_{n∈N}. Denote the LASSO estimator of β₀(n) by β̂(n, λ). If Conjecture 4.4 holds, then, with probability one,

    lim_{n→∞} ‖β̂ − β₀‖₂² / p(n) = lim_{n→∞} Ξ̂_Ω(y, X, λ, τ̂, I),

where τ̂ = ‖y − Xβ̂‖₂ / [n − ‖β̂‖₀]. In other words, Ξ̂_Ω(y, X, λ, τ̂, I) is a consistent estimator of the
asymptotic MSE of the LASSO.
We will assume that a similar state evolution holds for the general design. In fact, for the general
case, the replica method suggests the relation

    lim_{n→∞} ‖Ω^{1/2}(β̂ − β₀)‖₂² / p(n) = δ(τ² − σ₀²).

Hence, motivated by Corollary 4.3, we state the following result on the general Gaussian design
model.
Corollary 4.6. Assume that Conjecture 4.4 holds. In the general Gaussian design model, the variance of the noise can be accurately estimated by

    σ̂²(n, λ)/n ≡ τ̂² − Ξ̂_Ω(y, X, λ, τ̂, Ω^{1/2})/δ,

where δ = n/p and the other variables are defined as in Theorem 4.5. Also, we have

    lim_{n→∞} σ̂²/n = σ₀²,

almost surely, providing us a consistent estimator for the noise level in the LASSO.
Corollary 4.6 extends the results stated in Corollary 4.3 to general Gaussian design matrices.
The derivation of the formulas in Theorem 4.5 and Corollary 4.6 follows similar arguments as in the
standard Gaussian design model. In particular, they are obtained by applying SURE to the distributional result of Conjecture 4.4 and using the stationarity condition of the LASSO. Details of this
derivation can be found in [BEM13].
References
[BC13] A. Belloni and V. Chernozhukov, Least Squares after Model Selection in High-Dimensional Sparse Models, Bernoulli (2013).
[BdG11] P. Bühlmann and S. van de Geer, Statistics for high-dimensional data, Springer-Verlag Berlin Heidelberg, 2011.
[BEM13] M. Bayati, M. A. Erdogdu, and A. Montanari, Estimating LASSO Risk and Noise Level, long version (in preparation), 2013.
[BM12a] M. Bayati and A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing, IEEE Trans. on Inform. Theory 57 (2012), 764-785.
[BM12b] M. Bayati and A. Montanari, The LASSO risk for Gaussian matrices, IEEE Trans. on Inform. Theory 58 (2012).
[BRT09] P. Bickel, Y. Ritov, and A. Tsybakov, Simultaneous Analysis of Lasso and Dantzig Selector, The Annals of Statistics 37 (2009), 1705-1732.
[BS05] Z. Bai and J. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Springer, 2005.
[BT09] A. Beck and M. Teboulle, A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, SIAM J. Imaging Sciences 2 (2009), 183-202.
[BY93] Z. D. Bai and Y. Q. Yin, Limit of the Smallest Eigenvalue of a Large Dimensional Sample Covariance Matrix, The Annals of Probability 21 (1993), 1275-1294.
[CD95] S. S. Chen and D. L. Donoho, Examples of basis pursuit, Proceedings of Wavelet Applications in Signal and Image Processing III (San Diego, CA), 1995.
[CRT06] E. Candès, J. K. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics 59 (2006), 1207-1223.
[CT07] E. Candès and T. Tao, The Dantzig selector: statistical estimation when p is much larger than n, Annals of Statistics 35 (2007), 2313-2351.
[DMM09] D. L. Donoho, A. Maleki, and A. Montanari, Message Passing Algorithms for Compressed Sensing, Proceedings of the National Academy of Sciences 106 (2009), 18914-18919.
[DMM11] D. L. Donoho, A. Maleki, and A. Montanari, The noise-sensitivity phase transition in compressed sensing, Information Theory, IEEE Transactions on 57 (2011), no. 10, 6920-6941.
[FGH12] J. Fan, S. Guo, and N. Hao, Variance estimation using refitted cross-validation in ultrahigh dimensional regression, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74 (2012).
[JM13] A. Javanmard and A. Montanari, Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory, preprint available at arXiv:1301.4240, 2013.
[Joh12] I. Johnstone, Gaussian estimation: Sequence and wavelet models, book draft, 2012.
[MB06] N. Meinshausen and P. Bühlmann, High-dimensional graphs and variable selection with the lasso, The Annals of Statistics 34 (2006), no. 3, 1436-1462.
[NSvdG10] N. Städler, P. Bühlmann, and S. van de Geer, l1-penalization for Mixture Regression Models (with discussion), Test 19 (2010), 209-285.
[RFG09] S. Rangan, A. K. Fletcher, and V. K. Goyal, Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing, 2009.
[SJKG07] S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, An Interior-Point Method for Large-Scale l1-Regularized Least Squares, IEEE Journal on Selected Topics in Signal Processing 4 (2007), 606-617.
[Ste81] C. Stein, Estimation of the mean of a multivariate normal distribution, The Annals of Statistics 9 (1981), 1135-1151.
[SZ12] T. Sun and C. H. Zhang, Scaled sparse linear regression, Biometrika (2012), 1-20.
[Tib96] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Royal. Statist. Soc B 58 (1996), 267-288.
[Wai09] M. J. Wainwright, Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming, Information Theory, IEEE Transactions on 55 (2009), no. 5, 2183-2202.
[ZY06] P. Zhao and B. Yu, On model selection consistency of Lasso, The Journal of Machine Learning Research 7 (2006), 2541-2563.
4,363 | 4,949 | A Graphical Transformation for Belief Propagation:
Maximum Weight Matchings and Odd-Sized Cycles
Jinwoo Shin
Department of Electrical Engineering
Korea Advanced Institute of Science and Technology
Daejeon, 305-701, Republic of Korea
[email protected]

Andrew E. Gelfand*
Department of Computer Science
University of California, Irvine
Irvine, CA 92697-3435, USA
[email protected]

Michael Chertkov
Theoretical Division & Center for Nonlinear Studies
Los Alamos National Laboratory
Los Alamos, NM 87545, USA
[email protected]
Abstract
Max-product "belief propagation" (BP) is a popular distributed heuristic for finding the Maximum A Posteriori (MAP) assignment in a joint probability distribution represented by a Graphical Model (GM). It was recently shown that BP converges to the correct MAP assignment for a class of loopy GMs with the following
common feature: the Linear Programming (LP) relaxation to the MAP problem is
tight (has no integrality gap). Unfortunately, tightness of the LP relaxation does
not, in general, guarantee convergence and correctness of the BP algorithm. The
failure of BP in such cases motivates reverse engineering a solution - namely,
given a tight LP, can we design a "good" BP algorithm.
In this paper, we design a BP algorithm for the Maximum Weight Matching
(MWM) problem over general graphs. We prove that the algorithm converges
to the correct optimum if the respective LP relaxation, which may include inequalities associated with non-intersecting odd-sized cycles, is tight. The most
significant part of our approach is the introduction of a novel graph transformation
designed to force convergence of BP. Our theoretical result suggests an efficient
BP-based heuristic for the MWM problem, which consists of making sequential,
"cutting plane", modifications to the underlying GM. Our experiments show that
this heuristic performs as well as traditional cutting-plane algorithms using LP
solvers on MWM problems.
1 Introduction
Graphical Models (GMs) provide a useful representation for reasoning in a range of scientific fields
[1, 2, 3, 4]. Such models use a graph structure to encode the joint probability distribution, where
vertices correspond to random variables and edges (or lack thereof) specify conditional dependencies. An important inference task in many applications involving GMs is to find the most likely
assignment to the variables in a GM - the maximum a posteriori (MAP) configuration. Belief Propagation (BP) is a popular algorithm for approximately solving the MAP inference problem. BP is
an iterative, message passing algorithm that is exact on tree structured GMs. However, BP often
shows remarkably strong heuristic performance beyond trees, i.e. on GMs with loops. Distributed
implementation, associated ease of programming and strong parallelization potential are among the
main reasons for the popularity of the BP algorithm, e.g., see the parallel implementations of [5, 6].
The convergence and correctness of BP was recently established for a certain class of loopy GM
formulations of several classic combinatorial optimization problems, including matchings [7, 8, 9],
perfect matchings [10], independent sets [11] and network flows [12]. The important common
* Also at Theoretical Division of Los Alamos National Lab.
feature of these instances is that BP converges to a correct MAP assignment when the Linear Programming (LP) relaxation of the MAP inference problem is tight, i.e., it shows no integrality gap.
While this demonstrates that LP tightness is necessary for the convergence and correctness of BP,
it is unfortunately not sufficient in general. In other words, BP may not work even when the corresponding LP relaxation to the MAP inference problem is tight. This motivates a quest for improving
BP-based MAP solvers so that they work when the LP is tight.
In this paper, we consider a specific class of GMs corresponding to the Maximum Weight Matching
(MWM) problem and study if BP can be used as an iterative, message passing-based LP solver
when the MWM LP (relaxation) is tight. It was recently shown [15] that a MWM can be found in
polynomial time by solving a carefully chosen sequence of LP relaxations, where the sequence of
LPs are formed by adding and removing sets of so-called "blossom" inequalities [13] to the base
LP relaxation. Utilizing successive LP relaxations to solve the MWM problem is an example of
the popular cutting plane method for solving combinatorial optimization problems [14]. While the
approach in [15] is remarkable in that one needs only a polynomial number of "cut" inequalities,
it unfortunately requires solving an emerging sequence of LPs via traditional, centralized methods
(e.g., ellipsoid, interior-point or simplex) that may not be practical for large-scale problems. This
motivates our search for an efficient and distributed BP-based LP solver for this class of problems.
Our work builds upon that of Sanghavi, Malioutov and Willsky [8], who studied BP for the GM
formulation of the MWM problem on an arbitrary graph. The authors showed that max-product BP
converges to the correct MAP solution if the base LP relaxation with no blossoms - referred to herein
as MWM-LP - is tight. Unfortunately, the tightness is not guaranteed in general, and the convergence
and correctness for max-product BP do not readily extend to a GM with blossom constraints.
To resolve this issue, we propose a novel GM formulation of the MWM problem and show that maxproduct BP on this new GM converges to the MWM assignment as long as the MWM-LP relaxation
with blossom constraints is tight. The only restriction placed on our GM construction is that the
set of blossom constraints added to the base MWM-LP be non-intersecting (in edges). Our GM
construction is motivated by the so-called ?degree-two? (DT) condition, which requires that every
variable in the GM be associated to at most two factor functions. The DT condition is necessary
for analysis of BP using the computational tree technique, developed and advanced in [7, 8, 12, 16,
18, 19]. Note, that the DT condition is not satisfied by the standard MWM GM formulation, and
hence, we design a new GM that satisfies the DT condition via a clever graphical transformation namely, collapsing odd-sized cycles and defining new weights on the contracted graph. Importantly,
the MAP assignments of the two GMs are in one-to-one correspondence guaranteeing that a solution
to the original problem can be recovered.
Our theoretical result suggests a cutting-plane approach to the MWM problem, where BP is used
as the LP solver. In particular, we examine the BP solution to identify odd-sized cycle constraints
- ?cuts? - to add to the MWM-LP relaxation; then construct a new GM using our graphical transformation, run BP and repeat. We evaluate this heuristic empirically and show that its performance
is close to a traditional cutting-plane approach employing an LP solver rather than BP. Finally, we
note that the DT condition may neither be sufficient nor necessary for BP to work. It was necessary,
however, to provide theoretical guarantees for the special class of GMs considered. To our knowledge, our result is the first to suggest how to "fix" BP via a graph transformation so that it works
properly, i.e., recovers the desired LP solution. We believe that our success in crafting a graphical
transformation will offer useful insight into the design and analysis of BP algorithms for a wider
class of problems.
Organization. In Section 2, we introduce a standard GM formulation of the MWM problem as well
as the corresponding BP and LP. In Section 3, we introduce our new GM and describe performance
guarantees of the respective BP algorithm. In Section 4, we describe a cutting-plane(-like) method
using BP for the MWM problem and show its empirical performance for random MWM instances.
2 Preliminaries

2.1 Graphical Model for Maximum Weight Matchings
A joint distribution of n (discrete) random variables Z = [Z_i] ∈ Ω^n is called a Graphical Model
(GM) if it factorizes as follows: for z = [z_i] ∈ Ω^n,

    Pr[Z = z] ∝ ∏_{α∈F} ψ_α(z_α),   (1)

where F is a collection of subsets of [n] = {1, …, n}, z_α = [z_i : i ∈ α] is the subset of variables
indexed by α, and ψ_α is some (given) non-negative function. The function ψ_α is called a factor
(variable) function if |α| ≥ 2 (|α| = 1). For variable functions ψ_α with α = {i}, we simply write
ψ_α = ψ_i. One calls z a valid assignment if Pr[Z = z] > 0. The MAP assignment z* is defined as

    z* = arg max_{z∈Ω^n} Pr[Z = z].
Let us introduce the Maximum Weight Matching (MWM) problem and its related GM. Suppose we
are given an undirected graph G = (V, E) with weights {w_e : e ∈ E} assigned to its edges. A
matching is a set of edges without common vertices. The weight of a matching is the sum of the
corresponding edge weights. The MWM problem consists of finding a matching of maximum weight.
Associate a binary random variable with each edge, X = [X_e] ∈ {0, 1}^{|E|}, and consider the probability distribution: for x = [x_e] ∈ {0, 1}^{|E|},

    Pr[X = x] ∝ ∏_{e∈E} e^{w_e x_e} ∏_{i∈V} ψ_i(x) ∏_{C∈C} ψ_C(x),   (2)

where

    ψ_i(x) = 1 if Σ_{e∈δ(i)} x_e ≤ 1, and 0 otherwise;
    ψ_C(x) = 1 if Σ_{e∈E(C)} x_e ≤ (|C| − 1)/2, and 0 otherwise.

Here C is a set of odd-sized cycles C ⊂ 2^V, δ(i) = {(i, j) ∈ E} and E(C) = {(i, j) ∈ E :
i, j ∈ C}. Throughout the manuscript, we assume that cycles are non-intersecting in edges, i.e.,
E(C₁) ∩ E(C₂) = ∅ for all C₁, C₂ ∈ C. It is easy to see that a MAP assignment x* for the GM (2)
induces a MWM in G. We also assume that the MAP assignment is unique.
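For concreteness, a small sketch (Python; names ours) of the factor checks behind (2): given a 0-1 edge vector it verifies the matching constraints ψ_i and the odd-cycle constraints ψ_C, returning the weight of a valid assignment and None otherwise:

def gm_weight(x, edges, weights, nodes, cycles):
    # x: dict edge -> {0, 1}; edges: list of pairs (i, j); cycles: node sets
    for i in nodes:                  # psi_i: at most one incident edge chosen
        if sum(x[e] for e in edges if i in e) > 1:
            return None
    for C in cycles:                 # psi_C: at most (|C| - 1)/2 edges in E(C)
        if sum(x[e] for e in edges if e[0] in C and e[1] in C) > (len(C) - 1) // 2:
            return None
    return sum(weights[e] * x[e] for e in edges)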
2.2 Belief Propagation and Linear Programming for Maximum Weight Matchings
In this section, we introduce max-product Belief Propagation (BP) and the Linear Programming
(LP) relaxation to computing the MAP assignment in (2). We first describe the BP algorithm for the
general GM (1), then tailor the algorithm to the MWM GM (2). The BP algorithm updates the set of
2|F_i| messages {m^t_{α→i}(z_i), m^t_{i→α}(z_i) : z_i ∈ Ω} between every variable i and its associated factors
α ∈ F_i = {α ∈ F : i ∈ α, |α| ≥ 2} using the following update rules:

    m^{t+1}_{α→i}(z_i) = max_{z′ : z′_i = z_i} ψ_α(z′) ∏_{j∈α∖i} m^t_{j→α}(z′_j)   and
    m^{t+1}_{i→α}(z_i) = ψ_i(z_i) ∏_{α′∈F_i∖α} m^t_{α′→i}(z_i).

Here t denotes time and initially m⁰_{α→i}(·) = m⁰_{i→α}(·) = 1. Given a set of messages
{m_{i→α}(·), m_{α→i}(·)}, the BP (max-marginal) beliefs {n_i(z_i)} are defined as follows:

    n_i(z_i) = ψ_i(z_i) ∏_{α∈F_i} m_{α→i}(z_i).
For the GM (2), we let n^t_e(·) denote the BP belief on edge e ∈ E at time t. The algorithm outputs
the MAP estimate at time t, x^BP(t) = [x^BP_e(t)] ∈ [0, ?, 1]^{|E|}, using the beliefs and the rule:

    x^BP_e(t) = 1 if n^t_e(0) < n^t_e(1);   ? if n^t_e(0) = n^t_e(1);   0 if n^t_e(0) > n^t_e(1).
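To illustrate one tractable piece of these updates, the sketch below (Python; names ours) computes the factor-to-variable messages through a single degree factor ψ_i of (2). Because ψ_i only enforces "at most one incident edge chosen", the maximization over the other incident edges reduces to comparing "all others off" with "exactly one other on"; the full algorithm additionally needs the variable-to-factor messages and the cycle factors ψ_C:

def degree_factor_messages(incoming):
    # incoming: dict e -> (mu0, mu1), messages m_{e' -> i} for edges at node i
    out = {}
    for e in incoming:
        others = [incoming[f] for f in incoming if f != e]
        prod0 = 1.0
        for mu0, _ in others:
            prod0 *= mu0
        m1 = prod0                      # z_e = 1 forces every other edge to 0
        m0 = prod0                      # z_e = 0: no other edge chosen ...
        for mu0, mu1 in others:         # ... or exactly one other edge chosen
            if mu0 > 0:
                m0 = max(m0, prod0 / mu0 * mu1)
        out[e] = (m0, m1)
    return out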
The LP relaxation to the MAP problem for the GM (2) is:

    C-LP :  max Σ_{e∈E} w_e x_e
            s.t. Σ_{e∈δ(i)} x_e ≤ 1, ∀i ∈ V;
                 Σ_{e∈E(C)} x_e ≤ (|C| − 1)/2, ∀C ∈ C;
                 x_e ∈ [0, 1].
Observe that if the solution x^{C-LP} to C-LP is integral, i.e., x^{C-LP} ∈ {0, 1}^{|E|}, then it is a MAP
assignment, i.e., x^{C-LP} = x*. Sanghavi, Malioutov and Willsky [8] proved the following theorem
connecting the performance of BP and C-LP in a special case:
Theorem 2.1. If C = ∅ and the solution of C-LP is integral and unique, then x^BP(t) under the GM
(2) converges to the MWM assignment x*.
Adding a small random component to every weight guarantees the uniqueness condition required by
Theorem 2.1. A natural hope is that Theorem 2.1 extends to a non-empty C, since adding more cycles
can help to reduce the integrality gap of C-LP. However, the theorem does not hold when C ≠ ∅. For
example, BP does not converge for a triangle graph with edge weights {2, 1, 1} and C consisting of
the only cycle. This is true even though the solution to its C-LP is unique and integral.
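As a baseline for later comparisons, C-LP itself is a small LP; a sketch using SciPy's HiGHS backend (an implementation choice of ours) follows:

import numpy as np
from scipy.optimize import linprog

def solve_c_lp(edges, weights, nodes, cycles):
    # edges: list of pairs (i, j); cycles: list of node sets
    idx = {e: k for k, e in enumerate(edges)}
    A, b = [], []
    for i in nodes:                        # sum_{e in delta(i)} x_e <= 1
        row = np.zeros(len(edges))
        for e in edges:
            if i in e:
                row[idx[e]] = 1.0
        A.append(row); b.append(1.0)
    for C in cycles:                       # sum_{e in E(C)} x_e <= (|C| - 1)/2
        row = np.zeros(len(edges))
        for e in edges:
            if e[0] in C and e[1] in C:
                row[idx[e]] = 1.0
        A.append(row); b.append((len(C) - 1) / 2.0)
    c = -np.array([weights[e] for e in edges])   # maximize w.x via min -w.x
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * len(edges), method="highs")
    return {e: res.x[idx[e]] for e in edges}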
3 A Graphical Transformation for Convergent & Correct BP
The loss of convergence and correctness of BP when the MWM LP is tight (and unique) but C ≠ ∅
motivates the work in this section. We resolve the issue by designing a new GM, equivalent to the
original GM, such that when BP is run on this new GM it converges to the MAP/MWM assignment
whenever the LP relaxation is tight and unique - even if C ≠ ∅. The new GM is defined on an
auxiliary graph G′ = (V′, E′) with new weights {w′_e : e ∈ E′}, as follows:

    V′ = V ∪ {i_C : C ∈ C},
    E′ = E ∪ {(i_C, j) : j ∈ V(C), C ∈ C} ∖ {e : e ∈ ∪_{C∈C} E(C)},

    w′_e = (1/2) Σ_{e′∈E(C)} (−1)^{d_C(j,e′)} w_{e′}  if e = (i_C, j) for some C ∈ C,
    w′_e = w_e  otherwise.

Here d_C(j, e) is the graph distance of j and e in the cycle C = (j₁, j₂, …, j_k); e.g., if e = (j₂, j₃),
then d_C(j₁, e) = 1.
Figure 1: Example of original graph G (left) and new graph G′ (right) after collapsing cycle C =
(1, 2, 3, 4, 5). In the new graph G′, edge weight w_{1C} = (1/2)(w₁₂ − w₂₃ + w₃₄ − w₄₅ + w₁₅).
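A sketch of this construction (Python; the hub labels and the assumption that each cycle is supplied as its node sequence in cyclic order are ours):

def cyclic_dist(a, b, k):
    # distance between positions a and b on a k-cycle
    return min((a - b) % k, (b - a) % k)

def collapse_cycles(edges, weights, cycles):
    # Build (E', w'): each odd cycle C gets a hub i_C joined to every j in C,
    # and the edges inside C are removed.
    inside = set()
    for C in cycles:
        inside |= {frozenset(e) for e in zip(C, C[1:] + C[:1])}
    new_edges = [e for e in edges if frozenset(e) not in inside]
    new_w = {e: weights[e] for e in new_edges}
    for C in cycles:
        k = len(C)
        cyc = list(zip(C, C[1:] + C[:1]))          # E(C) in cyclic order
        for pos, j in enumerate(C):
            w = 0.0
            for t, e in enumerate(cyc):
                # d_C(j, e): distance from j to the nearer endpoint of e
                d = min(cyclic_dist(pos, t, k), cyclic_dist(pos, (t + 1) % k, k))
                w_e = weights[e] if e in weights else weights[(e[1], e[0])]
                w += (-1) ** d * w_e
            hub_edge = (("i_C", tuple(C)), j)
            new_edges.append(hub_edge)
            new_w[hub_edge] = 0.5 * w              # w'_{(i_C, j)}
    return new_edges, new_w

On the 5-cycle of Figure 1 this reproduces w_{1C} = (1/2)(w₁₂ − w₂₃ + w₃₄ − w₄₅ + w₁₅).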
Associate a binary variable with each new edge and consider the new probability distribution on
y = [y_e : e ∈ E′] ∈ {0, 1}^{|E′|}:

    Pr[Y = y] ∝ ∏_{e∈E′} e^{w′_e y_e} ∏_{i∈V} ψ_i(y) ∏_{C∈C} ψ_C(y),   (3)

where

    ψ_i(y) = 1 if Σ_{e∈δ(i)} y_e ≤ 1, and 0 otherwise;

    ψ_C(y) = 0 if Σ_{e∈δ(i_C)} y_e > |C| − 1;
             0 if Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{(i_C,j)} ∉ {0, 2} for some e ∈ E(C);
             1 otherwise.
It is not hard to check that the number of operations required to update messages at each round of
BP under the above GM is O(|V||E|): message updates involving factor ψ_C require solving a
MWM problem on a simple cycle - which can be done efficiently via dynamic programming in time
O(|C|), as sketched below - and the summation of the numbers of edges of the non-intersecting
cycles is at most |E|.
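The O(|C|) claim rests on the fact that, on a simple cycle, a maximum-weight matching can be found by conditioning on one edge and solving path problems with a one-pass dynamic program; a sketch (Python; in the actual message update the raw weights would be replaced by message-adjusted ones):

def mwm_on_path(w):
    # max-weight set of pairwise non-adjacent edges along a path;
    # best_i = max(best_{i-1}, best_{i-2} + w_i)
    prev2, prev1 = 0.0, 0.0
    for wi in w:
        prev2, prev1 = prev1, max(prev1, prev2 + wi)
    return prev1

def mwm_on_cycle(w):
    # w[0..k-1] are the edge weights around the cycle; w[i] is adjacent to
    # w[i-1] and w[(i+1) % k]
    if len(w) == 1:
        return max(w[0], 0.0)
    best = mwm_on_path(w[1:])                       # case: edge 0 unused
    best = max(best, w[0] + mwm_on_path(w[2:-1]))   # case: edge 0 used
    return best

We are now ready to state the main result of this paper.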
Theorem 3.1. If the solution of C-LP is integral and unique, then the BP-MAP estimate y^BP(t)
under the GM (3) converges to the corresponding MAP assignment y*. Furthermore, the MWM
assignment x* is reconstructible from y* as:

    x*_e = (1/2) Σ_{j∈V(C)} (−1)^{d_C(j,e)} y*_{(i_C,j)}  if e ∈ ∪_{C∈C} E(C),
    x*_e = y*_e  otherwise.   (4)
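A sketch of the reconstruction (4) in code (Python; it reuses cyclic_dist from the earlier sketch and the same hub-edge labels, which are our conventions):

def recover_x(y, cycles):
    # outside the collapsed cycles, x_e = y_e; inside a cycle C,
    # x_e = (1/2) sum_{j in V(C)} (-1)^{d_C(j, e)} y_{(i_C, j)}
    hub = lambda e: isinstance(e[0], tuple) and e[0][0] == "i_C"
    x = {e: v for e, v in y.items() if not hub(e)}
    for C in cycles:
        k = len(C)
        for t in range(k):
            e = (C[t], C[(t + 1) % k])
            s = sum((-1) ** min(cyclic_dist(pos, t, k),
                                cyclic_dist(pos, (t + 1) % k, k))
                    * y[(("i_C", tuple(C)), C[pos])]
                    for pos in range(k))
            x[e] = 0.5 * s
    return x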
The proof of Theorem 3.1 is provided in the following sections. We also establish the convergence
time of the BP algorithm under the GM (3) (see Lemma 3.2). We stress that the new GM (3) is
designed so that each variable is associated to at most two factor nodes. We call this condition,
which did not hold for the original GM (2), the "degree-two" (DT) condition. The DT condition
will play a critical role in the proof of Theorem 3.1. We further remark that even under the DT
condition and given tightness/uniqueness of the LP, proving correctness and convergence of BP is
still highly non trivial. In our case, it requires careful study of the computation tree induced by BP
with appropriate truncations at its leaves.
3.1 Main Lemma for Proof of Theorem 3.1
Let us introduce the following auxiliary LP over the new graph and weights.

    C-LP′ :  max Σ_{e∈E′} w′_e y_e
             s.t. Σ_{e∈δ(i)} y_e ≤ 1, ∀i ∈ V;   y_e ∈ [0, 1], ∀e ∈ E′;   (5)
                  Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{(i_C,j)} ∈ [0, 2], ∀e ∈ E(C);
                  Σ_{e∈δ(i_C)} y_e ≤ |C| − 1, ∀C ∈ C.   (6)
Consider the following one-to-one linear mapping between x = [x_e : e ∈ E] and y = [y_e : e ∈ E′]:

    y_e = Σ_{e′∈E(C)∩δ(i)} x_{e′}  if e = (i, i_C), and y_e = x_e otherwise;
    x_e = (1/2) Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{(i_C,j)}  if e ∈ ∪_{C∈C} E(C), and x_e = y_e otherwise.
Under the mapping, one can check that C-LP = C-LP′, and if the solution x^{C-LP} of C-LP is unique
and integral, the solution y^{C-LP′} of C-LP′ is as well, i.e., y^{C-LP′} = y*. Hence, (4) in Theorem 3.1
follows. Furthermore, since the solution y* = [y*_e] to C-LP′ is unique and integral, there exists c > 0
such that

    c = inf_{y ≠ y* : y is feasible to C-LP′} w′ · (y* − y) / |y* − y|,

where w′ = [w′_e]. Using this notation, we establish the following lemma characterizing performance
of the max-product BP over the new GM (3). Theorem 3.1 follows from this lemma directly.
Lemma 3.2. If the solution y^{C-LP′} of C-LP′ is integral and unique, i.e., y^{C-LP′} = y*, then

- If y*_e = 1, then n^t_e[1] > n^t_e[0] for all t > 6w′_max/c + 6,
- If y*_e = 0, then n^t_e[1] < n^t_e[0] for all t > 6w′_max/c + 6,

where n^t_e[·] denotes the BP belief of edge e at time t under the GM (3) and w′_max = max_{e∈E′} |w′_e|.
3.2 Proof of Lemma 3.2
This section provides the complete proof of Lemma 3.2. We focus here on the case of y*_e = 1, while
translation of the result to the opposite case of y*_e = 0 is straightforward. To derive a contradiction,
assume that n^t_e[1] ≤ n^t_e[0] and construct a tree-structured GM T_e(t) of depth t + 1, also known as
the computational tree, using the following scheme:

1. Add a copy of Y_e ∈ {0, 1} as the (root) variable (with variable function e^{w′_e Y_e}).
2. Repeat the following t times for each leaf variable Y_e on the current tree-structured GM.
   2-1. For each i ∈ V such that e ∈ δ(i) and ψ_i is not associated to Y_e of the current model, add ψ_i
        as a factor (function) with copies of {Y_{e′} ∈ {0, 1} : e′ ∈ δ(i) ∖ e} as child variables (with
        corresponding variable functions, i.e., {e^{w′_{e′} Y_{e′}}}).
   2-2. For each C ∈ C such that e ∈ δ(i_C) and ψ_C is not associated to Y_e of the current model, add
        ψ_C as a factor (function) with copies of {Y_{e′} ∈ {0, 1} : e′ ∈ δ(i_C) ∖ e} as child variables (with
        corresponding variable functions, i.e., {e^{w′_{e′} Y_{e′}}}).
It is known from [17] that there exists a MAP configuration y^TMAP on T_e(t) with y^TMAP_e = 0 at the
root variable. Now we construct a new assignment y^NEW on the computational tree T_e(t) as follows.

1. Initially, set y^NEW ← y^TMAP and let e be the root of the tree.
2. y^NEW ← FLIP_e(y^NEW).
3. For each child factor ψ, which is equal to ψ_i (i.e., e ∈ δ(i)) or ψ_C (i.e., e ∈ δ(i_C)), associated with
   e,
   (a) If ψ is satisfied by y^NEW and FLIP_e(y*) (i.e., ψ(y^NEW) = ψ(FLIP_e(y*)) = 1), then do
       nothing.
   (b) Else if there exists a child e′ of e through factor ψ such that y^NEW_{e′} ≠ y*_{e′} and ψ is satisfied by
       FLIP_{e′}(y^NEW) and FLIP_{e′}(FLIP_e(y*)), then go to step 2 with e ← e′.
   (c) Otherwise, report ERROR.

To aid the reader's understanding, we provide a figure describing an example of the above construction
in our technical report [21]. In the construction, FLIP_e(y) is the 0-1 vector made by flipping (i.e.,
changing from 0 to 1 or 1 to 0) the e-th coordinate of y. We note that there exists exactly one child
factor ψ in step 3 and we only choose one child e′ in step (b) (even though there are many possible
candidates). Due to this reason, the flip operations induce a path structure P in the tree T_e(t).¹ Now we
state the following key lemma for the above construction of y^NEW.
Lemma 3.3. ERROR is never reported in the construction described above.
Proof. The case when ψ = ψ_i in step 3 is easy, and we only provide the proof for the case when
ψ = ψ_C. We also assume that y^NEW_e is flipped as 1 → 0 (i.e., y*_e = 0), where the proof for the
case 0 → 1 follows in a similar manner. First, one can observe that y satisfies ψ_C if and only if y
is the 0-1 indicator vector of a union of disjoint even paths in the cycle C. Since y^NEW_e is flipped as
1 → 0, the even path including e is broken into an even (possibly empty) path and an odd (always
non-empty) path. We consider two cases: (a) there exists e′ within the odd path (i.e., y^NEW_{e′} = 1)
such that y*_{e′} = 0 and flipping y^NEW_{e′} as 1 → 0 breaks the odd path into two even (disjoint) paths; (b)
there exists no such e′ within the odd path.
For the first case (a), it is easy to see that we can maintain the structure of disjoint even paths in
y^NEW after flipping y^NEW_{e′} as 1 → 0, i.e., ψ is satisfied by FLIP_{e′}(y^NEW). For the second case (b),
we choose e′ as a neighbor of the farthest end point (from e) in the odd path, i.e., y^NEW_{e′} = 0 (before
flipping). Then, y*_{e′} = 1 since y* satisfies factor ψ_C and induces a union of disjoint even paths in
the cycle C. Therefore, if we flip y^NEW_{e′} as 0 → 1, then we can still maintain the structure of disjoint
even paths in y^NEW, and ψ is satisfied by FLIP_{e′}(y^NEW). The proof for the case of ψ satisfied by
FLIP_{e′}(FLIP_e(y*)) is similar. This completes the proof of Lemma 3.3.
Due to how it is constructed, y^NEW is a valid configuration, i.e., it satisfies all the factor functions in
T_e(t). Hence, it suffices to prove that w′(y^NEW) > w′(y^TMAP), which contradicts the assumption
that y^TMAP is a MAP configuration on T_e(t). To this end, for (i, j) ∈ E′, let n^{0→1}_{ij} and n^{1→0}_{ij} be the
numbers of flip operations 0 → 1 and 1 → 0 for copies of (i, j) in step 2 of the construction of
T_e(t). Then, one derives

    w′(y^NEW) = w′(y^TMAP) + w′ · n^{0→1} − w′ · n^{1→0},

where n^{0→1} = [n^{0→1}_{ij}] and n^{1→0} = [n^{1→0}_{ij}]. We consider two cases: (i) the path P does not arrive
at a leaf variable of T_e(t), and (ii) otherwise. Note that case (i) is possible only when the
condition in step (a) holds during the construction of y^NEW.

Case (i). In this case, we define y^ε_{ij} := y*_{ij} + ε(n^{1→0}_{ij} − n^{0→1}_{ij}), and establish the following lemma.
Lemma 3.4. y^ε is feasible to C-LP′ for small enough ε > 0.
Proof. We have to show that y^ε satisfies (5) and (6). Here, we prove that y^ε satisfies (6) for small
enough ε > 0, and the proof for (5) can be argued in a similar manner. To this end, for a given C ∈ C,
we consider the following polytope P_C:

    y_{(i_C,j)} ∈ [0, 1], ∀j ∈ C;   Σ_{j∈V(C)} y_{(i_C,j)} ≤ |C| − 1;
    Σ_{j∈V(C)} (−1)^{d_C(j,e)} y_{(i_C,j)} ∈ [0, 2], ∀e ∈ E(C).

¹ P may not have an alternating structure since both y^NEW_e and its child y^NEW_{e′} can be flipped in the same way.
We have to show that y^ε_C = [y^ε_e : e ∈ δ(i_C)] is within the polytope. It is easy to see that the
condition of step (a) never holds if ψ = ψ_C in step 3. For the i-th copy of ψ_C in P ∩ T_e(t),
we set y*_C(i) = FLIP_{e′}(FLIP_e(y*_C)) in step (b), where y*_C(i) ∈ P_C. Since the path P does not
hit a leaf variable of T_e(t), we have

    (1/N) Σ_{i=1}^N y*_C(i) = y*_C + (1/N)(n^{1→0}_C − n^{0→1}_C),

where N is the number of copies of ψ_C in P ∩ T_e(t). Furthermore, (1/N) Σ_{i=1}^N y*_C(i) ∈ P_C due to
y*_C(i) ∈ P_C. Therefore, y^ε_C ∈ P_C if ε ≤ 1/N. This completes the proof of Lemma 3.4.

The above lemma with w′(y*) > w′(y^ε) (due to the uniqueness of y*) implies that w′ · n^{0→1} >
w′ · n^{1→0}, which leads to w′(y^NEW) > w′(y^TMAP).
Case (ii). We consider the case when only one end of P hits a leaf variable Y_e of T_e(t),
where the proof of the other case follows in a similar manner. In this case, we define
y^ε_{ij} := y*_{ij} + ε(m^{1→0}_{ij} − m^{0→1}_{ij}), where m^{1→0} = [m^{1→0}_{ij}] and m^{0→1} = [m^{0→1}_{ij}] are constructed as follows:
1. Initially, set m^{1→0}, m^{0→1} to n^{1→0}, n^{0→1}.
2. If y^NEW_e is flipped as 1 → 0 and it is associated to a cycle parent factor ψ_C for some C ∈ C, then decrease
   m^{1→0}_e by 1, and
   2-1. If the parent y^NEW_{e′} is flipped from 1 → 0, then decrease m^{1→0}_{e′} by 1.
   2-2. Else if there exists a "brother" edge e″ ∈ δ(i_C) of e such that y*_{e″} = 1 and ψ_C is satisfied by
        FLIP_{e″}(FLIP_{e′}(y*)), then increase m^{0→1}_{e″} by 1.
   2-3. Otherwise, report ERROR.
3. If y^NEW_e is flipped as 1 → 0 and it is associated to a vertex parent factor ψ_i for some i ∈ V, then decrease
   m^{1→0}_e by 1.
4. If y^NEW_e is flipped as 0 → 1 and it is associated to a vertex parent factor ψ_i for some i ∈ V, then decrease
   m^{0→1}_e and m^{1→0}_{e′} by 1, where e′ ∈ δ(i) is the "parent" edge of e, and
   4-1. If the parent y^NEW_{e′} is associated to a cycle parent factor ψ_C,
        4-1-1. If the grand-parent y^NEW_{e″} is flipped from 1 → 0, then decrease m^{1→0}_{e″} by 1.
        4-1-2. Else if there exists a "brother" edge e‴ ∈ δ(i_C) of e′ such that y*_{e‴} = 1 and ψ_C is satisfied by
               FLIP_{e‴}(FLIP_{e″}(y*)), then increase m^{0→1}_{e‴} by 1.
        4-1-3. Otherwise, report ERROR.
   4-2. Otherwise, do nothing.
We establish the following lemmas.
Lemma 3.5. ERROR is never reported in the above construction.
Lemma 3.6. y^ε is feasible to C-LP′ for small enough ε > 0.
Proofs of Lemma 3.5 and Lemma 3.6 are analogous to those of Lemma 3.3 and Lemma 3.4, respectively. From Lemma 3.6, we have
    c ≤ w′ · (y* − y^ε) / |y* − y^ε| ≤ ε w′(m^{0→1} − m^{1→0}) / (ε(t − 3))
      ≤ [w′(n^{0→1} − n^{1→0}) + 3w′_max] / (t − 3),

where |y* − y^ε| ≥ ε(t − 3) follows from the fact that P hits a leaf variable of T_e(t) and there are
at most three increases or decreases in m^{0→1} and m^{1→0} in the above construction. Hence,

    w′(n^{0→1} − n^{1→0}) ≥ c(t − 3) − 3w′_max > 0   if t > 3w′_max/c + 3,

which implies w′(y^NEW) > w′(y^TMAP). If both ends of P hit leaf variables of T_e(t), we need
t > 6w′_max/c + 6. This completes the proof of Lemma 3.2.
4 Cutting-Plane Algorithm using Belief Propagation
In the previous section we established that BP on a carefully designed GM using non-intersecting
odd-sized cycles solves the MWM problem when the corresponding MWM-LP relaxation is tight.
However, finding a collection of odd-sized cycles to ensure tightness of the MWM-LP is a challenging task. In this section, we provide a heuristic algorithm which we call CP-BP (cutting-plane using
BP) for this task. It consists of making sequential, "cutting plane", modifications to the underlying
LP (and corresponding GM) using the output of the BP algorithm in the previous step. CP-BP is
defined as follows:
1. Initialize C = ∅.
2. Run BP on the GM in (3) for T iterations.
3. For each edge e ∈ E, set
       y_e = 1    if n^T_e[1] > n^T_e[0] and n^{T−1}_e[1] > n^{T−1}_e[0];
       y_e = 0    if n^T_e[1] < n^T_e[0] and n^{T−1}_e[1] < n^{T−1}_e[0];
       y_e = 1/2  otherwise.
4. Compute x = [x_e] from y = [y_e] as per (4), and terminate if x ∉ {0, 1/2, 1}^{|E|}.
5. If there is no edge e with x_e = 1/2, return x. Otherwise, add a non-intersecting odd-sized cycle of
   edges {e : x_e = 1/2} to C and go to step 2; or terminate if no such cycle exists.
In the above procedure, BP can be replaced by an LP solver to directly obtain x in step 4. This
results in a traditional cutting-plane LP (CP-LP) method for the MWM problem [20]. The primary
reason why we design CP-BP to terminate when x ∉ {0, 1/2, 1}^{|E|} is because the solution x of
C-LP is always half-integral². Note that x ∉ {0, 1/2, 1}^{|E|} occurs when BP fails to find the solution
to the current MWM-LP.
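The outer loop of CP-BP can be sketched as follows (Python; run_bp and find_odd_cycle are placeholders for the BP routine on the GM (3) and for any odd-cycle search over the half-integral edges - both names are ours):

def cp_bp(edges, weights, nodes, run_bp, find_odd_cycle, T=100):
    # run_bp(edges, weights, cycles, T) -> dict of beliefs y_e in {0, 1, 0.5}
    # find_odd_cycle(half_edges, cycles) -> new non-intersecting odd cycle, or None
    cycles = []
    while True:
        y = run_bp(edges, weights, cycles, T)        # steps 2-3
        x = recover_x(y, cycles)                     # step 4, via Eq. (4)
        if any(v not in (0.0, 0.5, 1.0) for v in x.values()):
            return None                              # BP failed on the current LP
        half = [e for e, v in x.items() if v == 0.5]
        if not half:
            return x                                 # integral: a MWM
        C = find_odd_cycle(half, cycles)             # step 5
        if C is None:
            return None                              # converged but stuck
        cycles.append(C)

Here recover_x is the Eq. (4) reconstruction sketched earlier. Replacing run_bp with an LP solve of the current C-LP yields the CP-LP baseline discussed above.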
We compare CP-BP and CP-LP in order to gauge the effectiveness of BP as an LP solver for MWM
problems. We conducted experiments on two types of synthetically generated problems: 1) Sparse
Graph instances; and 2) Triangulation instances. The sparse graph instances were generated by
forming a complete graph on |V | = {50, 100} nodes and independently eliminating edges with
probability p = {0.5, 0.9}. Integral weights, drawn uniformly in [1, 2²⁰], are assigned to the remaining edges. The triangulation instances were generated by randomly placing |V| = {100, 200}
points in the 2²⁰ × 2²⁰ square and computing a Delaunay triangulation on this set of points. Edge
weights were set to the rounded Euclidean distance between two points. A set of 100 instances were
generated for each setting of |V | and CP-BP was run for T = 100 iterations.
The results are summarized in Table 1 and show that: 1) CP-BP is almost as good as CP-LP for
solving the MWM problem; and 2) our graphical transformation allows BP to solve significantly
more MWM problems than are solvable by BP run on the ?bare? LP without odd-sized cycles.
50% sparse graphs                                90% sparse graphs
|V| / |E|     # CP-BP   # Tight LPs   # CP-LP    |V| / |E|    # CP-BP   # Tight LPs   # CP-LP
50 / 490      94 %      65 %          98 %       50 / 121     90 %      59 %          91 %
100 / 1963    92 %      48 %          95 %       100 / 476    63 %      50 %          63 %

              Triangulation, |V| = 100, |E| = 285         Triangulation, |V| = 200, |E| = 583
Algorithm     # Correct / # Converged   Time (sec)        # Correct / # Converged   Time (sec)
CP-BP         33 / 36                   0.2 [0.0,0.4]     11 / 12                   0.9 [0.2,2.5]
CP-LP         34 / 100                  0.1 [0.0,0.3]     15 / 100                  0.8 [0.3,1.6]

Table 1: Evaluation of CP-BP and CP-LP on random MWM instances. Columns # CP-BP and # CP-LP indicate the percentage of instances
in which the cutting plane methods found a MWM. The column # Tight LPs indicates the percentage for which the initial MWM-LP is tight
(i.e., C = ∅). # Correct and # Converged indicate the number of correct matchings and the number of instances in which CP-BP converged upon
termination but we failed to find a non-intersecting odd-sized cycle. The Time column indicates the mean [min, max] time.
² A proof of 1/2-integrality, which we did not find in the literature, is presented in our technical report [21].
References
[1] J. Yedidia, W. Freeman, and Y. Weiss, "Constructing free-energy approximations and generalized belief propagation algorithms," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2282-2312, 2005.
[2] T. J. Richardson and R. L. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
[3] M. Mézard and A. Montanari, Information, physics, and computation, ser. Oxford Graduate Texts. Oxford: Oxford Univ. Press, 2009.
[4] M. J. Wainwright and M. I. Jordan, "Graphical models, exponential families, and variational inference," Foundations and Trends in Machine Learning, vol. 1, no. 1, pp. 1-305, 2008.
[5] J. Gonzalez, Y. Low, and C. Guestrin, "Residual splash for optimally parallelizing belief propagation," in International Conference on Artificial Intelligence and Statistics, 2009.
[6] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein, "GraphLab: A New Parallel Framework for Machine Learning," in Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[7] M. Bayati, D. Shah, and M. Sharma, "Max-product for maximum weight matching: Convergence, correctness, and LP duality," IEEE Transactions on Information Theory, vol. 54, no. 3, pp. 1241-1251, 2008.
[8] S. Sanghavi, D. Malioutov, and A. Willsky, "Linear Programming Analysis of Loopy Belief Propagation for Weighted Matching," in Neural Information Processing Systems (NIPS), 2007.
[9] B. Huang and T. Jebara, "Loopy belief propagation for bipartite maximum weight b-matching," in Artificial Intelligence and Statistics (AISTATS), 2007.
[10] M. Bayati, C. Borgs, J. Chayes, and R. Zecchina, "Belief-Propagation for Weighted b-Matchings on Arbitrary Graphs and its Relation to Linear Programs with Integer Solutions," SIAM Journal in Discrete Math, vol. 25, pp. 989-1011, 2011.
[11] S. Sanghavi, D. Shah, and A. Willsky, "Message-passing for max-weight independent set," in Neural Information Processing Systems (NIPS), 2007.
[12] D. Gamarnik, D. Shah, and Y. Wei, "Belief propagation for min-cost network flow: convergence & correctness," in SODA, pp. 279-292, 2010.
[13] J. Edmonds, "Paths, trees, and flowers," Canadian Journal of Mathematics, vol. 17, pp. 449-467, 1965.
[14] G. Dantzig, R. Fulkerson, and S. Johnson, "Solution of a large-scale traveling-salesman problem," Operations Research, vol. 2, no. 4, pp. 393-410, 1954.
[15] K. Chandrasekaran, L. A. Vegh, and S. Vempala, "The cutting plane method is polynomial for perfect matchings," in Foundations of Computer Science (FOCS), 2012.
[16] R. G. Gallager, "Low Density Parity Check Codes," MIT Press, Cambridge, MA, 1963.
[17] Y. Weiss, "Belief propagation and revision in networks with loops," MIT AI Laboratory, Technical Report 1616, 1997.
[18] B. J. Frey and R. Koetter, "Exact inference using the attenuated max-product algorithm," Advanced Mean Field Methods: Theory and Practice, ed. Manfred Opper and David Saad, MIT Press, 2000.
[19] Y. Weiss and W. T. Freeman, "On the Optimality of Solutions of the Max-Product Belief-Propagation Algorithm in Arbitrary Graphs," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 736-744, 2001.
[20] M. Grötschel and O. Holland, "Solving matching problems with linear programming," Mathematical Programming, vol. 33, no. 3, pp. 243-259, 1985.
[21] J. Shin, A. E. Gelfand, and M. Chertkov, "A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles," arXiv preprint arXiv:1306.1167, 2013.
4,364 | 495 | Best-First Model Merging for
Dynamic Learning and Recognition
Stephen M. Omohundro
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, California 94704
Abstract
"Best-first model merging" is a general technique for dynamically
choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks
and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and
constraint surface modelling, and choosing the structure of a balltree or
bumptree to maximize efficiency of access.
1 TOWARD MORE COGNITIVE LEARNING
Standard backpropagation neural networks learn in a way which appears to be quite different from human learning. Viewed as a cognitive system, a standard network always maintains a complete model of its domain. This model is mostly wrong initially, but gets
gradually better and better as data appears. The net deals with all data in much the same
way and has no representation for the strength of evidence behind a certain conclusion. The
network architecture is usually chosen before any data is seen and the processing is much
the same in the early phases of learning as in the late phases.
Human and animal learning appears to proceed in quite a different manner. When an organism has not had many experiences in a domain of importance to it, each individual experience is critical. Rather than use such an experience to slightly modify the parameters of a
global model, a better strategy is to remember the experience in detail. Early in learning, an
organism doesn't know which features of an experience are important unless it has a strong
prior knowledge of the domain. Without such prior knowledgeJts best strategy is to generalize on the basis of a similarity measure to individual stored experiences. (Shepard, 1987)
shows that there is a universal exponentially decaying form for this kind of similarity based
generalization over a wide variety of sensory domains in several studied species. As experiences accumulate, the organism eventually gets enough data to reliably validate models
from complex classes. At this point the animal need no longer remember individual experiences, but rather only the discovered generalities (eg. as rules). With such a strategy, it is
possible for a system to maintain a measure of confidence in its predictions while building ever more complex models of its environment.
Systems based on these two types of learning have also appeared in the neural network, statistics and machine learning communities. In the learning literature one finds both "table-lookup" or "memory-based" methods and "parameter-fitting" methods. In statistics the distinction is made between "non-parametric" and "parametric" methods. Table-lookup methods work by storing examples and generalize to new situations on the basis of similarity to
the old ones. Such methods are capable of one-shot learning and have a measure of the applicability of their knowledge to new situations but are limited in their generalization capability. Parameter fitting models choose the parameters of a predetermined model to best fit
a set of examples. They usually take longer to train and are susceptible to computational
difficulties such as local maxima but can potentially generalize better by extending the influence of examples over the whole space. Aside from computational difficulties, their fundamental problem is overfitting, ie. having insufficient data to validate a particular
parameter setting as useful for generalization.
2 OVERFITTING IN LEARNING AND RECOGNITION
There have been many recent results (eg. based on the Vapnik-Chervonenkis dimension)
which identify the number of examples needed to validate choices made from specific parametric model families. We would like a learning system to be able to induce extremely
complex models of the world but we don't want to have to present it with the enormous
amount of data needed to validate such a model unless it is really needed. (Vapnik, 1982)
proposes a technique for avoiding overfitting while allowing models of arbitrary complexity. The idea is to start with a nested family of model spaces, whose members contain ever
more complex models. When the system has only a small amount of data it can only validate models in the smaller model classes. As more data arrives, however, the more complex classes may be considered. If at any point a fit is found to within desired tolerances,
however, only the amount of data needed by the smallest class containing the chosen model
is needed. Thus there is the potential for choosing complex models without penalizing situations in which the model is simple. The model merging approach may be viewed in these
terms except that instead of a single nested family, there is a widely branching tree of model
spaces.
Like learning, recognition processes (visual, auditory, etc.) aim at constructing models
from data. As such they are subject to the same considerations regarding overfitting. Figure
1 shows a perceptual example where a simpler model (a single segment) is perceptually
chosen to explain the data (4 almost collinear dots) than a more complex model (two segments) which fits the data better. An intuitive explanation is that if the dots were generated
by two segments, it would be an amazing coincidence that they are almost collinear, if it
were generated by one, that fact is easily explained. Many of the Gestalt phenomena can be
considered in the same terms. Many of the processes used in recognition (eg. segmentation,
grouping) have direct analogs in learning and vice versa.
[Figure 1 graphic: four nearly collinear dots, modeled either as a single segment or as two segments.]
Figure 1: An example of Occam's razor in recognition.
There has been much recent interest in the network community in Bayesian methods for
model selection while avoiding overfitting (eg. Buntine and Weigend, 1992 and MacKay
1992). Learning and recognition fit naturally together in a Bayesian framework. The Bayesian approach makes explicit the need for a prior distribution. The posterior distribution
generated by learning becomes the prior distribution for recognition. The model merging
process described in this paper is applicable to both phases and the knowledge representation it suggests may be used for both processes as well.
There are at least three properties of the world that may be encoded in a prior distribution
and have a dramatic effect on learning and recognition and are essential to the model merging approach. The continuity prior is that the world is geometric and unless there is contrary
data a system should prefer continuous models over discontinuous ones. This prior leads to
a wide variety of what may be called "geometric learning algorithms" (Omohundro, 1990).
The sparseness prior is that the world is sparsely interacting. This says that probable models naturally decompose into components which only directly affect one another in a sparse
manner. The primary origin of this prior is that physical objects usually only directly affect
nearby objects in space and time. This prior is responsible for the success of representations
such as Markov random fields and Bayesian networks which encode conditional independence relations. Even if the individual models consist of sparsely interacting components,
it still might be that the data we receive for learning or recognition depends in an intricate
way on all components. The locality prior prefers models in which the data decomposes
into components which are directly affected by only a small number of model components.
For example, in the learning setting only a small portion of the knowledge base will be relevant to any specific situation. In the recognition setting, an individual pixel is determined
by only a small number of objects in the scene. In geometric settings, a localized representation allows only a small number of model parameters to affect any individual prediction.
3 MODEL MERGING
Based on the above considerations, an ideal learning or recognition system should model
the world using a collection of sparsely connected, smoothly parameterized, localized models. This is an apt description of many of the neural network models currently in use. Bayesian methods provide an optimal means for induction with such a choice of prior over
models but are computationally intractable in complex situations. We would therefore like
to develop heuristic approaches which approximate the Bayesian solution and avoid overfitting. Based on the idealization of animal learning in the first section, we would like a system which smoothly moves from a memory-based regime, in which the models are the data, into ever more complex parameterized models. Because of the locality prior, model components only affect a subset of the data. We can therefore choose the complexity of
components which are relevant to different portions of the data space according to the data
which has been received there. This allows for reliably validated models of extremely high
complexity in some regions of the space while other portions are modeled with low complexity. If only a small number of examples have been seen in some region, these are simply
remembered and generalization is based on similarity. As more data arrives, if regularities
are found and there is enough data present to justify them, more complex parameterized
models are incorporated.
There are many possible approaches to implementing such a strategy. We have investigated
a particular heuristic which can be made computationally efficient and appears to work well
in a variety of areas. The best-first model merging approach is applicable in a variety of situations in which complex models are constructed by combining simple ones. The idea is to
improve a complex model by replacing two of its component models by a single model. This "merged" model may be in the same family as the original components. More interestingly, because the combined data from the merged components is used in determining
the parameters of the merged model, it may come from a larger parameterized class. The
critical idea is to never allow the system to hypothesize a model which is more complex
than can be justified by the data it is based on. The "best-first" aspect is to always choose
to merge the pair of models which decrease the likelihood of the data the least. The merging
may be stopped according to a variety of criteria which are now applied to individual model
components rather than the entire model. Examples of such criteria are those based on
cross-validation, Bayesian Occam factors, VC bounds, etc. In experiments in a variety of
domains, this approach does an excellent job of discovering regularities and allocating
modelling resources efficiently.
4 MODEL MERGING VS. K-MEANS FOR RBFs
Our first example is the problem of choosing centers in radial basis function networks for
approximating functions. In the simplest approach, a radial basis function (eg. a Gaussian)
is located at each training input location. The induced function is a linear combination of
these basis functions which minimizes the mean square error of the training examples. Better models may be obtained by using fewer basis functions than data points. Most work on
choosing the centers of these functions uses a clustering technique such as k-means (eg.
Moody and Darken, 1989). This is reasonable because it puts the representational power of
the model in the regions of highest density where errors are more critical. It ignores the
structure of the modelled function, however. The model merging approach starts with a basis function at each training point and successively merges pairs which increase the training
error the least. We compared this approach with the k-means approach in a variety of circumstances.
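The Python sketch below is our reconstruction of this procedure, not the original code. It refits the output weights by least squares after every candidate merge and, as a simplification of refitting the merged model from the combined data, replaces a merged pair of centers by their midpoint.

```python
# A compact sketch of best-first center merging for RBF regression
# (our reconstruction of the procedure described above, not the original code).
import numpy as np
from itertools import combinations

def rbf_fit_error(X, y, centers, width):
    # Gaussian design matrix, then linear least squares for the output weights.
    Phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ w - y) ** 2)

def best_first_merge(X, y, n_centers, width=0.4):
    centers = X.copy()                        # one basis function per sample
    while len(centers) > n_centers:
        best = None
        for i, j in combinations(range(len(centers)), 2):
            cand = np.vstack([np.delete(centers, [i, j], axis=0),
                              (centers[i] + centers[j]) / 2])
            err = rbf_fit_error(X, y, cand, width)
            if best is None or err < best[0]:
                best = (err, cand)
        centers = best[1]                     # keep the cheapest merge
    return centers
```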
Figure 2 shows an example where the function on the plane to be learned is a sigmoid in x
centered at 0 and is constant in y. Thus the function varies most along the x axis. The data
is drawn from a Gaussian distribution which is centered at (-.5,0). 21 training samples were
drawn from this distribution and from these a radial basis function network with 6 Gaussian
basis functions was learned. The X's in the figure show the centers chosen by k-means. As
expected, they are clustered near the center of the Gaussian source distribution. The triangles show the centers chosen by best-first model merging. While there is some tendency to
focus on the source center, there is also a tendency to represent the region where the modelled function varies the most. The training error is over 10 times less with model merging
961
962
Omohundro
and the test error on an independent test set is about 3 times lower. These results were typical in a variety of test runs. This simple example shows one way in which underlying structure is naturally discovered by the merging technique.
[Figure 2 plot: dots mark the 21 training samples; triangles the model-merging (mm) centers; x's the k-means centers. Legend parameters: 21 samples, 6 centers, RBF width .4, Gaussian width .4, Gaussian center -.5, sigmoid width .4.]
Figure 2: Radial basis function centers in two dimensions chosen by model
merging and by k-means. The dots show the 21 training samples. The x's are
the centers chosen by k-means, the triangles by model merging. The training
error was .008098 for k-means and .000604 for model merging. The test error
was .012463 for k-means and .004638 for model merging.
5 APPROXIMATING CURVES AND SURFACES
As a second intuitive example, consider the problem of modelling a curve in the plane by
a combination of straight line segments. The error function may be taken as the mean square error from each curve point to the nearest segment point. A merging step in this case consists of replacing two segments by a single segment. We always choose the pair such that the merged segment increases the error the least. Figure 3 shows the approximations generated by this strategy. It does an excellent job at identifying the essentially linear portions of the curve and puts the boundaries between component models at the "corners". The corresponding "top-down" approach would start with a single segment and repeatedly split it. This approach sometimes has to make decisions too early and often misses the corners in the curve. While not shown in the figure, as repeated mergings take place, more data is available for each segment. This would allow us to use more complex models than linear segments such as Bezier curves. It is possible to reliably induce a representation which is linear in some portions and higher order in others. Such models potentially have many parameters and would be subject to overfitting if they were learned directly rather than by going through merge steps.
Exactly the same strategy may be applied to modelling higher-dimensional constraint surfaces by hyperplanes or functions by piecewise linear portions. The model merging approach naturally complements the efficient mapping and constraint surface representations
described in (Omohundro, 1991) based on bumptrees.
[Figure 3 panels, left to right: Error=1, Error=2, Error=5, Error=10, Error=20.]
Figure 3: Approximation of a curve by best-first merging of segment models. The top row
shows the endpoints chosen by the algorithm at various levels of allowed error. The
bottom row shows the corresponding approximation to the curve.
Notice, in this example, that we need only consider merging neighboring segments, as the increased error in merging non-adjoining segments would be too great. This imposes a locality on the problem which allows for extremely efficient computation. The idea is to maintain a priority queue with all potential merges on it, ordered by the increase in error caused by the merge. This consists of only the neighboring pairs (of which there are n-1 if there are n segments). The top pair on the queue is removed and the merge operation it represents is performed if it doesn't violate the stopping criteria. The other potential merge pairs which incorporated the merged segments must be removed from the queue and the new possible mergings with the generated segment must be inserted (alternatively, nothing need be removed and each pair is checked for viability when it reaches the top of the queue). The neighborhood structure allows each of the operations to be performed quickly with the appropriate data structures, and the entire merging process takes a time which is linear (or linear times logarithmic) in the number of component models. Complex time-varying curves may easily be processed in real time on typical workstations. In higher dimensions, hierarchical geometric data structures (as in Omohundro, 1987, 1990) allow a
similar reduction in computation based on locality.
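Below is one possible rendering of this queue-driven schedule in Python (our sketch, with made-up names), using the lazy-revalidation variant mentioned in parentheses above: stale queue entries carry an old version stamp and are simply discarded when popped.

```python
import heapq
import numpy as np

def chord_err(pts, a, b):
    """Max distance from pts[a..b] to the chord pts[a] -> pts[b]."""
    p, q = pts[a], pts[b]
    d = q - p
    t = np.clip(((pts[a:b + 1] - p) @ d) / (d @ d), 0.0, 1.0)
    return float(np.max(np.linalg.norm(pts[a:b + 1] - (p + t[:, None] * d), axis=1)))

def queue_merge(pts, tol):
    n = len(pts)
    prv = {i: i - 1 for i in range(1, n)}       # doubly linked breakpoint chain
    nxt = {i: i + 1 for i in range(n - 1)}
    ver = {i: 0 for i in range(n)}              # bumped whenever neighbors change
    heap = [(chord_err(pts, i - 1, i + 1), 0, i) for i in range(1, n - 1)]
    heapq.heapify(heap)
    while heap:
        err, v, k = heapq.heappop(heap)
        if k not in prv or k not in nxt or v != ver[k]:
            continue                            # removed or stale entry
        if err > tol:
            break                               # cheapest merge violates tolerance
        a, b = prv.pop(k), nxt.pop(k)           # merge segments (a,k) and (k,b)
        nxt[a], prv[b] = b, a
        for m in (a, b):
            ver[m] += 1
            if m in prv and m in nxt:           # interior neighbors get new entries
                heapq.heappush(heap, (chord_err(pts, prv[m], nxt[m]), ver[m], m))
    return sorted(set(prv) | {0})               # surviving breakpoints
```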
6 BALLTREE CONSTRUCTION
The model merging approach is applicable to a wide variety of adaptive structures. The "balltree" structure described in (Omohundro, 1989) provides efficient access to regions in geometric spaces. It consists of a nested hierarchy of hyper-balls surrounding given leaf balls and efficiently supports queries which test for intersection, inclusion, or nearness to a leaf ball. The balltree construction algorithm itself provides an example of a best-first merge approach in a higher dimensional space. To determine the best hierarchy we can merge the leaf balls pairwise in such a way that the total volume of all the merged regions is as small as possible. The figure compares the quality of balltrees constructed using best-first merging to those constructed using top-down and incremental algorithms. As in other domains, the top-down approach has to make major decisions too early and often makes
suboptimal choices. The merging approach only makes global decisions after many local
decisions which adapt well to the structure.
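A minimal sketch of this pairwise merging follows; it is our illustration, not the original implementation. For clarity it scans all pairs each round (quadratic per step), whereas a practical version would restrict candidates to nearby balls, and it scores merges by the merged radius, a proxy for volume.

```python
import numpy as np
from itertools import combinations

def enclosing_ball(c1, r1, c2, r2):
    """Smallest ball containing the balls (c1, r1) and (c2, r2)."""
    d = np.linalg.norm(c2 - c1)
    if d + r2 <= r1:
        return c1, r1                     # second ball already inside the first
    if d + r1 <= r2:
        return c2, r2
    R = 0.5 * (d + r1 + r2)
    return c1 + (R - r1) / d * (c2 - c1), R

def merge_tree(balls):
    """balls: list of (center ndarray, radius); returns a nested pair hierarchy."""
    nodes = [((c, r), (c, r)) for c, r in balls]     # (bounding ball, subtree)
    while len(nodes) > 1:
        i, j = min(combinations(range(len(nodes)), 2),
                   key=lambda p: enclosing_ball(*nodes[p[0]][0], *nodes[p[1]][0])[1])
        ball = enclosing_ball(*nodes[i][0], *nodes[j][0])
        sub = (nodes[i][1], nodes[j][1])
        nodes = [nodes[k] for k in range(len(nodes)) if k not in (i, j)]
        nodes.append((ball, sub))
    return nodes[0]
```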
[Figure 4 plot: balltree error versus number of balls (0 to 500), with curves for top-down construction, incremental construction, and best-first merge construction.]
Figure 4: Balltree error as a function of number of balls for the top-down, incremental, and
best-first merging construction methods. Leaf balls have uniformly distributed centers in
5 dimensions with radii uniformly distributed less than .1.
7 CONCLUSION
We have described a simple but powerful heuristic for dynamically building models for
both learning and recognition which constructs complex models that adapt well to the underlying structure. We presented three different examples which only begin to touch on the
possibilities. To hint at the broad applicability, we will briefly describe several other applications we are currently examining.
In (Omohundro, 1991) we presented an efficient structure for modelling mappings based
on a collection of local mapping models which were combined according to a partition of
unity formed by "influence functions" associated with each model. This representation is
very flexible and can be made computationally efficient. While in the experiments of that
paper, the local models were affine functions (constant plus linear), they may be chosen
from any desired class. The model merging approach builds such a mapping representation
by successively merging models and replacing them with a new model whose influence
Best-First Model Merging for Dynamic Learning and Recognition
function extends over the range of the two original influence functions. Because it is based
on more data, the new model can be chosen from a larger complexity class of functions than
the originals.
One of the most fundamental inductive tasks is density estimation, ie. estimating a probability distribution from samples drawn from it. A powerful standard technique is adaptive kernel estimation in which a normalized Gaussian (or other kernel) is placed at each sample point with a width determined by the local sample density (Devroye and Gyorfi, 1985). Model merging can be applied to improve the generalization performance of this approach by choosing successively more complex component densities once enough data has accumulated by merging. For example, consider a density supported on a curve in a high-dimensional space. Initially the estimate will consist of radially-symmetric Gaussians at each
sample point. After successive mergings, however, the one-dimensional linear structure
can be discovered (and the Gaussian components be chosen from the larger class of extended Gaussians) and the generalization dramatically improved.
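One plausible primitive for such component merging, shown purely as our illustration, is the standard moment-matching merge of two weighted Gaussian components; the merged component preserves the mixture's first two moments and can carry a full covariance even when the originals were radially symmetric.

```python
import numpy as np

def merge_gaussians(w1, m1, C1, w2, m2, C2):
    """Moment-matched merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    C = (w1 * (C1 + np.outer(d1, d1)) + w2 * (C2 + np.outer(d2, d2))) / w
    return w, m, C
```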
Other natural areas of application include inducing the structure of hidden Markov models,
stochastic context-free grammars, Markov random fields, and Bayesian networks.
References
D. H. Ballard and C. M. Brown. (1982) Computer Vision. Englewood Cliffs, N.J.: Prentice-Hall.
W. L. Buntine and A. S. Weigend. (1992) Bayesian Back-Propagation. To appear in: Complex Systems.
L. Devroye and L. Gyorfi. (1985) Nonparametric Density Estimation: The Ll View, New
York: Wiley.
D. J. MacKay. (1992) A Practical Bayesian Framework for Backprop Networks. Caltech
preprint.
J. Moody and C. Darken. (1989) Fast learning in networks of locally-tuned processing
units. Neural Computation, 1:281-294.
S. M. Omohundro. (1987) Efficient algorithms with neural network behavior. Complex
Systems 1:273-347.
S. M. Omohundro. (1989) Five balltree construction algorithms. International Computer
Science Institute Technical Report TR-89-063.
S. M. Omohundro. (1990) Geometric learning algorithms. Physica D 42:307-321.
S. M. Omohundro. (1991) Bumptrees for Efficient Function, Constraint, and Oassification
Learning. In Lippmann, Moody, and Touretzky. (eds.) Advances in Neural Information
Processing Systems 3. San Mateo, CA: Morgan Kaufmann Publishers.
R. N. Shepard. (1987) Toward a universal law of generalization for psychological science.
Science.
V. Vapnik. (1982) Estimation of Dependences Based on Empirical Data, New York:
Springer-Verlag.
| 495 |@word briefly:1 dramatic:1 tr:1 shot:1 reduction:1 chervonenkis:1 tuned:1 interestingly:1 must:2 partition:1 predetermined:1 hypothesize:1 aside:1 v:1 discovering:1 fewer:1 leaf:4 plane:2 nearness:1 probablity:1 provides:2 location:1 successive:1 hyperplanes:1 simpler:1 five:1 along:1 constructed:3 direct:1 consists:3 fitting:2 manner:2 pairwise:1 expected:1 intricate:1 behavior:1 becomes:1 begin:1 estimating:1 underlying:2 what:1 kind:1 minimizes:1 suite:1 berkeley:1 remember:2 exactly:1 wrong:1 unit:1 appear:1 before:1 local:6 modify:1 bumptrees:2 cliff:1 merge:8 ap:1 might:1 plus:1 studied:1 mateo:1 dynamically:2 suggests:1 limited:1 range:1 gyorfi:2 practical:1 responsible:1 backpropagation:1 area:2 universal:2 empirical:1 significantly:1 confidence:1 radial:5 induce:2 get:2 selection:1 put:2 context:1 influence:4 center:13 identifying:1 rule:1 construction:7 hierarchy:1 oassification:1 us:1 origin:1 recognition:17 located:1 sparsely:3 bottom:1 inserted:1 preprint:1 coincidence:1 region:6 connected:1 decrease:1 highest:1 removed:3 environment:1 complexity:5 dynamic:5 segment:17 efficiency:1 basis:11 triangle:3 comer:2 easily:2 various:1 surrounding:1 train:1 fast:1 describe:1 hyper:1 choosing:8 neighborhood:1 quite:2 whose:2 widely:1 encoded:1 heuristic:3 say:1 larger:3 grammar:1 statistic:2 itself:1 net:1 neighboring:2 relevant:2 combining:1 detennined:1 representational:1 intuitive:2 description:1 validate:5 frrst:1 inducing:1 regularity:2 extending:1 incremental:3 object:3 develop:1 amazing:1 nearest:1 received:1 job:2 strong:1 come:1 radius:1 merged:6 discontinuous:1 stochastic:1 vc:1 centered:2 human:2 implementing:1 backprop:1 generalization:7 really:1 decompose:1 clustered:1 probable:1 physica:1 mm:1 considered:2 great:1 mapping:4 major:1 early:4 smallest:1 estimation:4 applicable:4 currently:2 vice:1 always:3 gaussian:8 aim:1 rather:4 avoid:1 bumptree:1 timevarying:1 encode:1 validated:1 focus:1 modelling:5 likelihood:1 stopping:1 accumulated:1 entire:2 initially:2 fust:1 relation:1 hidden:1 going:1 pixel:1 flexible:1 proposes:1 animal:2 mackay:2 field:2 construct:1 never:1 having:1 once:1 represents:1 broad:1 others:1 report:1 piecewise:1 hint:1 xiii:1 individual:7 phase:3 maintain:2 interest:1 englewood:1 possibility:1 arrives:2 behind:1 allocating:1 capable:1 experience:8 unless:3 tree:1 old:1 desired:2 stopped:1 psychological:1 increased:1 applicability:2 subset:1 examining:1 too:3 buntine:2 stored:1 varies:2 combined:2 st:2 density:6 international:2 fundamental:2 ie:2 together:1 quickly:1 moody:3 prenticehall:1 successively:3 containing:1 choose:4 priority:1 cognitive:2 potential:3 lookup:1 caused:1 depends:1 performed:2 view:1 portion:6 start:3 decaying:1 maintains:1 capability:1 square:2 figme:1 formed:1 kaufmann:1 efficiently:2 identify:1 generalize:3 modelled:2 bayesian:10 straight:1 explain:1 fo:1 reach:1 touretzky:1 ed:1 checked:1 naturally:3 associated:1 workstation:1 auditory:1 radially:1 knowledge:4 segmentation:1 back:1 appears:4 higher:4 improved:1 generality:1 replacing:3 touch:1 propagation:1 continuity:1 quality:1 building:2 effect:1 contain:1 brown:1 inductive:1 symmetric:1 deal:1 eg:6 ll:1 branching:1 width:4 razor:1 criterion:2 omohundro:13 demonstrate:1 complete:1 consideration:2 sigmoid:3 physical:1 rust:3 endpoint:1 exponentially:1 shepard:2 volume:1 balltrees:1 analog:1 organism:3 accumulate:1 versa:1 inclusion:1 had:1 dot:4 access:2 similarity:4 surface:4 longer:2 etc:2 base:1 posterior:1 recent:2 certain:1 verlag:1 tenns:1 success:1 
remembered:1 caltech:1 seen:2 morgan:1 maximize:1 stephen:1 violate:1 technical:1 adapt:2 cross:1 prediction:2 circumstance:1 essentially:1 vision:1 represent:1 sometimes:1 kernel:2 receive:1 justified:1 want:1 iiii:1 source:2 publisher:1 subject:2 induced:1 member:1 contrary:1 near:1 ideal:1 split:1 enough:3 viability:1 variety:9 affect:4 fit:4 independence:1 architecture:3 suboptimal:1 regarding:1 idea:4 collinear:2 queue:4 proceed:1 york:2 prefers:1 repeatedly:1 dramatically:1 useful:1 amount:3 nonparametric:1 locally:1 processed:1 simplest:1 notice:1 proach:1 affected:1 enormous:1 drawn:3 penalizing:1 idealization:1 weigend:2 run:1 parameterized:4 powerful:2 place:1 family:3 almost:2 reasonable:1 extends:1 decision:4 prefer:1 bound:1 strength:1 constraint:4 scene:1 nearby:1 aspect:1 extremely:3 according:3 combination:2 ball:7 smaller:1 slightly:1 unity:1 explained:1 gradually:1 taken:1 computationally:3 resource:1 eventually:1 needed:5 know:1 generalizes:1 available:1 operation:2 detennine:1 gaussians:2 hierarchical:1 appropriate:1 apt:1 original:3 top:8 clustering:1 include:1 build:1 approximating:2 move:1 strategy:6 parametric:3 primary:1 dependence:1 balltree:5 street:1 toward:2 induction:1 devroye:2 modeled:1 insufficient:1 mostly:1 susceptible:1 potentially:2 reliably:3 allowing:1 darken:2 markov:3 situation:6 extended:1 ever:3 incorporated:1 nonnalized:1 discovered:3 interacting:2 arbitrary:1 community:2 complement:1 pair:7 california:1 merges:2 learned:3 able:1 usually:3 appeared:1 regime:1 memory:2 explanation:1 power:1 critical:3 difficulty:2 natural:1 improve:2 axis:1 prior:13 literature:1 geometric:6 determining:1 law:1 localized:2 validation:1 affine:1 imposes:1 storing:1 occam:2 row:2 placed:1 supported:1 free:1 allow:3 institute:2 wide:3 sparse:1 tolerance:1 distributed:2 curve:11 dimension:4 boundary:1 world:5 doesn:2 sensory:1 ignores:1 made:4 collection:2 adaptive:1 san:1 gestalt:1 approximate:1 lippmann:1 global:2 overfitting:5 alternatively:1 don:1 continuous:1 decomposes:1 table:1 learn:1 ballard:1 ca:1 investigated:1 complex:19 excellent:2 constructing:1 domain:6 whole:1 nothing:1 repeated:1 allowed:1 wiley:1 explicit:1 perceptual:1 late:1 down:5 specific:2 evidence:1 grouping:1 essential:1 consist:2 intractable:1 vapnik:3 merging:38 importance:1 perceptually:1 sparseness:1 locality:4 smoothly:2 intersection:1 logarithmic:1 simply:1 visual:1 ordered:1 springer:1 nested:3 conditional:1 viewed:2 rbf:2 leaming:2 typical:2 except:1 uniformly:2 determined:1 justify:1 miss:1 called:1 specie:1 total:1 tendency:2 support:1 avoiding:3 phenomenon:1 |
4,365 | 4,950 | Sensor Selection in High-Dimensional
Gaussian Trees with Nuisances
Daniel Levine
MIT LIDS
dlevine@mit.edu
Jonathan P. How
MIT LIDS
jhow@mit.edu
Abstract
We consider the sensor selection problem on multivariate Gaussian distributions
where only a subset of latent variables is of inferential interest. For pairs of vertices connected by a unique path in the graph, we show that there exist decompositions of nonlocal mutual information into local information measures that can
be computed efficiently from the output of message passing algorithms. We integrate these decompositions into a computationally efficient greedy selector where
the computational expense of quantification can be distributed across nodes in the
network. Experimental results demonstrate the comparative efficiency of our algorithms for sensor selection in high-dimensional distributions. We additionally
derive an online-computable performance bound based on augmentations of the
relevant latent variable set that, when such a valid augmentation exists, is applicable for any distribution with nuisances.
1
Introduction
This paper addresses the problem of focused active inference: selecting a subset of observable random variables that is maximally informative with respect to a specified subset of latent random
variables. The subset selection problem is motivated by the desire to reduce the overall cost of
inference while providing greater inferential accuracy. For example, in the context of sensor networks, control of the data acquisition process can lead to lower energy expenses in terms of sensing,
computation, and communication [1, 2].
In many inferential problems, the objective is to reduce uncertainty in only a subset of the unknown
quantities, which are related to each other and to observations through a joint probability distribution
that includes auxiliary variables called nuisances. On their own, nuisances are not of any extrinsic
importance to the uncertainty reduction task and merely serve as intermediaries when describing
statistical relationships, as encoded with the joint distribution, between variables. The structure in
the joint can be represented parsimoniously with a probabilistic graphical model, often leading to
efficient inference algorithms [3, 4, 5]. However, marginalization of nuisance variables is potentially
expensive and can mar the very sparsity of the graphical model that permitted efficient inference.
Therefore, we seek methods for selecting informative subsets of observations in graphical models
that retain nuisance variables.
Two primary issues arise from the inclusion of nuisance variables in the problem. Observation
random variables and relevant latent variables may be nonadjacent in the graphical model due to
the interposition of nuisances between them, requiring the development of information measures
that extend beyond adjacency (alternatively, locality) in the graph. More generally, the absence of
certain conditional independencies, particularly between observations conditioned on the relevant
latent variable set, means that one cannot directly apply the performance bounds associated with
submodularity [6, 7, 8].
In an effort to pave the way for analyzing focused active inference on the class of general distributions, this paper specifically examines multivariate Gaussian distributions, which exhibit a number of properties amenable to analysis, and later specializes to Gaussian trees. This paper presents
a decomposition of pairwise nonlocal mutual information (MI) measures on Gaussian graphs that
permits efficient information valuation, e.g., to be used in a greedy selection. Both the valuation
and subsequent selection may be distributed over nodes in the network, which can be of benefit for
high-dimensional distributions and/or large-scale distributed sensor networks. It is also shown how
an augmentation to the relevant set can lead to an online-computable performance bound for general
distributions with nuisances.
The nonlocal MI decomposition extensively exploits properties of Gaussian distributions, Markov
random fields, and Gaussian belief propagation (GaBP), which are reviewed in Section 2. The formal
problem statement of focused active inference is stated in Section 3, along with an example that
contrasts focused and unfocused selection. Section 4 presents pairwise nonlocal MI decompositions
for scalar and vectoral Gaussian Markov random fields. Section 5 shows how to integrate pairwise
nonlocal MI into a distributed greedy selection algorithm for the focused active inference problem;
this algorithm is benchmarked in Section 6. A performance bound applicable to any focused selector
is presented in Section 7.
2 Preliminaries
2.1 Markov Random Fields (MRFs)
Let G = (V, E) be a Markov random field (MRF) with vertex set V and edge set E. Let u and v be vertices of the graph G. A u-v path is a finite sequence of adjacent vertices, starting with vertex u and terminating at vertex v, that does not repeat any vertex. Let P_G(u, v) denote the set of all paths between distinct u and v in G. If |P_G(u, v)| > 0, then u and v are graph connected. If |P_G(u, v)| = 1, then there is a unique path between u and v; denote the sole element of P_G(u, v) by P̃_{u:v}.
If |P_G(u, v)| = 1 for all u, v ∈ V, then G is a tree. If |P_G(u, v)| ≤ 1 for all u, v ∈ V, then G is a forest, i.e., a disjoint union of trees. A chain is a simple tree with diameter equal to the number of nodes. A chain is said to be embedded in graph G if the nodes in the chain comprise a unique path in G.
For MRFs, the global Markov property relates connectivity in the graph to implied conditional independencies. If D ⊆ V, then G_D = (D, E_D) is the subgraph induced by D, with E_D = E ∩ (D × D). For disjoint subsets A, B, C ⊆ V, let G\B be the subgraph induced by V \ B. The global Markov property holds that x_A ⊥⊥ x_C | x_B iff |P_{G\B}(i, j)| = 0 for all i ∈ A and j ∈ C.
2.2 Gaussian Distributions in Information Form
Consider a random vector x distributed according to a multivariate Gaussian distribution N(μ, Σ) with mean μ and (symmetric, positive definite) covariance Σ ≻ 0. One could equivalently consider the information form x ∼ N^{-1}(h, J) with precision matrix J = Σ^{-1} ≻ 0 and potential vector h = Jμ, for which p_x(x) ∝ exp{-(1/2) x^T J x + h^T x}.
One can marginalize out or condition on a subset of random variables by considering a partition of x into two subvectors, x_1 and x_2, such that

x = [x_1; x_2] ∼ N^{-1}( [h_1; h_2], [J_{11}, J_{12}; J_{12}^T, J_{22}] ).

In the information form, the marginal distribution over x_1 is p_{x_1}(·) = N^{-1}(·; h'_1, J'_1), where h'_1 = h_1 - J_{12} J_{22}^{-1} h_2 and J'_1 = J_{11} - J_{12} J_{22}^{-1} J_{12}^T, the latter being the Schur complement of J_{22}. Conditioning on a particular realization x_2 of the random subvector x_2 induces the conditional distribution p_{x_1|x_2}(x_1|x_2) = N^{-1}(x_1; h_{1|2}, J_{11}), where h_{1|2} = h_1 - J_{12} x_2, and J_{11} is exactly the upper-left block submatrix of J. (Note that the conditional precision matrix is independent of the value of the realized x_2.)
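A minimal NumPy sketch of these two operations follows (ours, not the authors' implementation); `keep` and the observed indices are assumed to partition the variables.

```python
import numpy as np

def marginalize_info(h, J, keep):
    """Marginalize N^{-1}(h, J) down to the indices in `keep` (Schur complement)."""
    keep = np.asarray(keep)
    out = np.setdiff1d(np.arange(len(h)), keep)
    J12 = J[np.ix_(keep, out)]
    h_m = h[keep] - J12 @ np.linalg.solve(J[np.ix_(out, out)], h[out])
    J_m = J[np.ix_(keep, keep)] - J12 @ np.linalg.solve(J[np.ix_(out, out)], J12.T)
    return h_m, J_m

def condition_info(h, J, keep, obs, x_obs):
    """Condition on x[obs] = x_obs; `keep` must be the complement of `obs`."""
    return h[keep] - J[np.ix_(keep, obs)] @ x_obs, J[np.ix_(keep, keep)]
```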
If x ∼ N^{-1}(h, J), where h ∈ R^n and J ∈ R^{n×n}, then the (differential) entropy of x is [9]

H(x) = -(1/2) log( (2πe)^{-n} · det(J) ).   (1)

Likewise, for nonempty A ⊆ {1, ..., n}, and (possibly empty) B ⊆ {1, ..., n} \ A, let J'_{A|B} be the precision matrix parameterizing p_{x_A|x_B}. The conditional entropy of x_A ∈ R^d given x_B is

H(x_A | x_B) = -(1/2) log( (2πe)^{-d} · det(J'_{A|B}) ).   (2)

The mutual information between x_A and x_B is

I(x_A; x_B) = H(x_A) + H(x_B) - H(x_A, x_B) = (1/2) log( det(J'_{{A,B}}) / ( det(J'_A) det(J'_B) ) ),   (3)

which generally requires O(n^3) operations to compute via Schur complement.
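The hypothetical helper below (ours) evaluates (3) directly from J by forming the three marginal precisions with Schur complements; this is the cubic-cost route the later sections try to avoid.

```python
import numpy as np

def gaussian_mi(J, A, B):
    """I(x_A; x_B) for x ~ N^{-1}(h, J), via (3)."""
    def marg_prec(idx):
        idx = np.asarray(idx)
        out = np.setdiff1d(np.arange(J.shape[0]), idx)
        Jaa = J[np.ix_(idx, idx)]
        if out.size == 0:
            return Jaa
        Jab = J[np.ix_(idx, out)]
        return Jaa - Jab @ np.linalg.solve(J[np.ix_(out, out)], Jab.T)
    def logdet(M):
        sign, ld = np.linalg.slogdet(M)
        assert sign > 0                      # marginal precisions are PD
        return ld
    AB = np.concatenate([np.asarray(A), np.asarray(B)])
    return 0.5 * (logdet(marg_prec(AB)) - logdet(marg_prec(A)) - logdet(marg_prec(B)))
```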
2.3 Gaussian MRFs (GMRFs)
If x ∼ N^{-1}(h, J), the conditional independence structure of p_x(·) can be represented with a Gaussian MRF (GMRF) G = (V, E), where E is determined by the sparsity pattern of J and the pairwise Markov property: {i, j} ∈ E iff J_{ij} ≠ 0.
In a scalar GMRF, V indexes scalar components of x. In a vectoral GMRF, V indexes disjoint subvectors of x, each of potentially different dimension. The block submatrix J_{ii} can be thought of as specifying the sparsity pattern of the scalar micro-network within the vectoral macro-node i ∈ V.
2.4 Gaussian Belief Propagation (GaBP)
If x can be partitioned into n subvectors of dimension at most d, and the resulting graph is tree-shaped, then all marginal precision matrices J'_i, i ∈ V, can be computed by Gaussian belief propagation (GaBP) [10] in O(n · d^3). For such trees, one can also compute all edge marginal precision matrices J'_{{i,j}}, {i, j} ∈ E, with the same asymptotic complexity of O(n · d^3).
In light of (3), pairwise MI quantities between adjacent nodes i and j may be expressed as

I(x_i; x_j) = H(x_i) + H(x_j) - H(x_i, x_j)
            = -(1/2) ln det(J'_i) - (1/2) ln det(J'_j) + (1/2) ln det(J'_{{i,j}}),   {i, j} ∈ E,   (4)

i.e., purely in terms of node and edge marginal precision matrices. Thus, GaBP provides a way of computing all local pairwise MI quantities in O(n · d^3).
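Given GaBP's outputs, evaluating (4) over all edges is a few determinants per edge; the dictionary interface below is our assumption, not the paper's API.

```python
import numpy as np

def local_pairwise_mi(Jn, Je):
    """Jn[i]: node marginal precision (d x d); Je[(i, j)]: edge marginal (2d x 2d)."""
    def logdet(M):
        sign, ld = np.linalg.slogdet(np.atleast_2d(M))
        assert sign > 0
        return ld
    return {e: 0.5 * (logdet(Je[e]) - logdet(Jn[e[0]]) - logdet(Jn[e[1]]))
            for e in Je}
```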
Note that Gaussian trees comprise an important class of distributions that subsumes Gaussian hidden
Markov models (HMMs), and GaBP on trees is a generalization of the Kalman filtering/smoothing
algorithms that operate on HMMs. Moreover, the graphical inference community appears to best understand the convergence of message passing algorithms for continuous distributions on subclasses
of multivariate Gaussians (e.g., tree-shaped [10], walk-summable [11], and feedback-separable [12]
models, among others).
3 Problem Statement
Let p_x(·) = N^{-1}(·; h, J) be represented by GMRF G = (V, E), and consider a partition of V into the subsets of latent nodes U and observable nodes S, with R ⊆ U denoting the subset of relevant latent variables (i.e., those to be inferred). Given a cost function c : 2^S → R_{≥0} over subsets of observations, and a budget β ∈ R_{≥0}, the focused active inference problem is

maximize_{A⊆S}  I(x_R; x_A)
s.t.  c(A) ≤ β.   (5)
The focused active inference problem in (5) is distinguished from the unfocused active inference problem

maximize_{A⊆S}  I(x_U; x_A)
s.t.  c(A) ≤ β,   (6)

which considers the entirety of the latent state U ⊇ R to be of interest. Both problems are known to be NP-hard [13, 14].
By the chain rule and nonnegativity of MI, I(x_U; x_A) = I(x_R; x_A) + I(x_{U\R}; x_A | x_R) ≥ I(x_R; x_A), for any A ⊆ S. Therefore, maximizing unfocused MI does not imply maximizing focused MI. Focused active inference must be posed as a separate problem to avoid the situation where the observation selector becomes fixated on inferring nuisance variables as a result of I(x_{U\R}; x_A | x_R) being included implicitly in the valuation. In fact, an unfocused selector can perform arbitrarily poorly with respect to a focused metric, as the following example illustrates.
Example 1. Consider a scalar GMRF over a four-node chain (Figure 1a), whereby J_{13} = J_{14} = J_{24} = 0 by the pairwise Markov property, with R = {2}, S = {1, 4}, c(A) = |A| (i.e., unit-cost observations), and β = 1. The optimal unfocused decision rule A*_{(UF)} = argmax_{a∈{1,4}} I(x_2, x_3; x_a) can be shown, by conditional independence and positive definiteness of J, to reduce to

A*_{(UF)} = {4}  if |J_{34}| ≥ |J_{12}|,   A*_{(UF)} = {1}  otherwise,

independent of J_{23}, which parameterizes the edge potential between nodes 2 and 3. Conversely, the optimal focused decision rule A*_{(F)} = argmax_{a∈{1,4}} I(x_2; x_a) can be shown to be

A*_{(F)} = {4}  if |J_{23}| · 1{J_{34}^2 - J_{12}^2 J_{34}^2 - J_{12}^2 ≥ 0} ≥ sqrt( (1 - J_{34}^2) J_{12}^2 / J_{34}^2 ),   A*_{(F)} = {1}  otherwise,

where 1{·} is the indicator function, which evaluates to 1 when its argument is true and 0 otherwise. The loss associated with optimizing the "wrong" information measure is demonstrated in Figure 1b. The reason for this loss is that as |J_{23}| → 0^+, the information that node 3 can convey about node 2 also approaches zero, although the unfocused decision rule is oblivious to this fact.
[Figure 1a graphic: four-node chain x_1 - x_2 - x_3 - x_4. Figure 1b plot: I(x_R; x_A) in nats versus |J_{23}| (with J_{12}^2 = 0.3, J_{34}^2 = 0.5), comparing the unfocused and focused policies.]
Figure 1: (a) Graphical model for the four-node chain example. (b) Unfocused vs. focused policy comparison. There exists a range of values for |J_{23}| such that the unfocused and focused policies coincide; however, as |J_{23}| → 0^+, the unfocused policy approaches complete performance loss with respect to the focused measure.
4
1
2
...
k
G?1
G?2
G?k
(a) Unique path with sidegraphs.
(b) Vectoral graph with thin edges.
Figure 2: (a) Example of a nontree graph G with a unique path P?1:k between nodes 1 and k. The
?sidegraph? attached to each node i ? P?1:k is labeled as G?i . (b) Example of a vectoral graph with
thin edges, with internal (scalar) structure depicted.
4 Nonlocal MI Decomposition
For GMRFs with n nodes indexing d-dimensional random subvectors, I(x_R; x_A) can be computed exactly in O((nd)^3) via Schur complements/inversions on the precision matrix J. However, certain graph structures permit the computation via belief propagation of all local pairwise MI terms I(x_i; x_j), for adjacent nodes i, j ∈ V, in O(n · d^3), a substantial savings for large networks. This section describes a transformation of nonlocal MI between uniquely path-connected nodes that permits a decomposition into the sum of transformed local MI quantities, i.e., those relating adjacent nodes in the graph. Furthermore, the local MI terms can be transformed in constant time, yielding an O(n · d^3) method for computing any pairwise nonlocal MI quantity coinciding with a unique path.
Definition 1 (Warped MI). For disjoint subsets A, B, C ⊆ V, the warped mutual information measure W : 2^V × 2^V × 2^V → (-∞, 0] is defined such that W(A; B|C) := (1/2) log(1 - exp{-2 I(x_A; x_B | x_C)}). For convenience, let W(i; j|C) := W({i}; {j}|C) for i, j ∈ V.
Remark 2. For i, j ∈ V indexing scalar nodes, the warped MI of Definition 1 reduces to W(i; j) = log |ρ_{ij}|, where ρ_{ij} ∈ [-1, 1] is the correlation coefficient between scalar r.v.s x_i and x_j. The measure log |ρ_{ij}| has long been known to the graphical model learning community as an "additive tree distance" [15, 16], and our decomposition for vectoral graphs is a novel application for sensor selection problems. To the best of the authors' knowledge, the only other distribution class with established additive distances are tree-shaped symmetric discrete distributions [16], which require a very limiting parameterization of the potential functions defined over edges in the factorization of the joint distribution.
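A short numerical check of Definition 1 against Remark 2, with an arbitrary correlation value:

```python
import numpy as np

def warped(mi):
    return 0.5 * np.log(1.0 - np.exp(-2.0 * mi))      # W, always in (-inf, 0]

rho = 0.6                                             # arbitrary scalar correlation
mi = -0.5 * np.log(1.0 - rho ** 2)                    # scalar Gaussian MI
assert np.isclose(warped(mi), np.log(abs(rho)))       # W(i; j) = log|rho_ij|
```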
Proposition 3 (Scalar Nonlocal MI Decomposition). For any GMRF G = (V, E) where V indexes scalar random variables, if |P_G(u, v)| = 1 for distinct vertices u, v ∈ V, then for any C ⊆ V \ {u, v}, I(x_u; x_v | x_C) can be decomposed as

W(u; v|C) = Σ_{{i,j} ∈ Ẽ_{u:v}} W(i; j|C),   (7)

where Ẽ_{u:v} is the set of edges joining consecutive nodes of P̃_{u:v}, the unique path between u and v and sole element of P_G(u, v).
(Proofs of this and subsequent propositions can be found in the supplementary material.)
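For the scalar case the claim can also be sanity-checked numerically: on a Gaussian chain, log|ρ| is additive along the path. The chain parameters below are arbitrary.

```python
import numpy as np

n = 5                                                 # a 5-node scalar chain
J = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.6), 1)
     + np.diag(np.full(n - 1, -0.6), -1))             # tridiagonal, PD
Sigma = np.linalg.inv(J)
rho = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))

lhs = np.log(abs(rho[0, n - 1]))                      # W(u; v) for the endpoints
rhs = sum(np.log(abs(rho[i, i + 1])) for i in range(n - 1))
assert np.isclose(lhs, rhs)                           # equation (7) with C empty
```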
Remark 4. Proposition 3 requires only that the path between vertices u and v be unique. If G is a tree, this is obviously satisfied. However, the result holds on any graph for which: the subgraph induced by P̃_{u:v} is a chain; and every i ∈ P̃_{u:v} separates N(i) \ P̃_{u:v} from P̃_{u:v} \ {i}, where N(i) := {j : {i, j} ∈ E} is the neighbor set of i. See Figure 2a for an example of a nontree graph with a unique path.
Definition 5 (Thin Edges). An edge {i, j} ∈ E of GMRF G = (V, E; J) is thin if the corresponding submatrix J_{ij} has exactly one nonzero scalar component. (See Figure 2b.)
For vectoral problems, each node may contain a subnetwork of arbitrarily connected scalar random variables (see Figure 2b). Under the assumption of thin edges (Definition 5), a unique path between nodes u and v must enter interstitial nodes through one scalar r.v. and leave through one scalar r.v. Therefore, let φ_i(u, v|C) ∈ (-∞, 0] denote the warped MI between the enter and exit r.v.s of interstitial vectoral node i on P̃_{u:v}, with conditioning set C ⊆ V \ {u, v}.^1 Note that φ_i(u, v|C) can be computed online in O(d^3) via local marginalization given J'_{i|C}, which is an output of GaBP.
Proposition 6 (Vectoral Nonlocal MI Decomposition). For any GMRF G = (V, E) where V indexes random vectors of dimension at most d and the edges in E are thin, if |P_G(u, v)| = 1 for distinct vertices u, v ∈ V, then for any C ⊆ V \ {u, v}, I(x_u; x_v | x_C) can be decomposed as

W(u; v|C) = Σ_{{i,j} ∈ Ẽ_{u:v}} W(i; j|C) + Σ_{i ∈ P̃_{u:v} \ {u,v}} φ_i(u, v|C).   (8)
5 (Distributed) Focused Greedy Selection
The nonlocal MI decompositions of Section 4 can be used to efficiently solve the focused greedy selection problem, which at each iteration, given the subset A ⊆ S of previously selected observable random variables, is

argmax_{y ∈ S\A : c(y) ≤ β - c(A)}  I(x_R; x_y | x_A).

To proceed, first consider the singleton case R = {r} for r ∈ U. Running GaBP on the graph G conditioned on A and subsequently computing all terms W(i; j|A), ∀{i, j} ∈ E, incurs a computational cost of O(n · d^3). Once GaBP has converged, node r authors an "r-message" with the value 0. Each neighbor i ∈ N(r) receives that message with value modified by W(r; i|A); there is no φ term because there are no interstitial nodes between r and its neighbors. Subsequently, each i ∈ N(r) messages its neighbors j ∈ N(i) \ {r}, modifying the value of its r-message by W(i; j|A) + φ_i(r, j|A), the latter term being computed online in O(d^3) from J'_{i|A}, itself an output of GaBP.^2 Then j messages N(j) \ {i}, and so on down to the leaves of the tree. Since there are at most n-1 edges in a forest, the total cost of dissemination is still O(n · d^3), after which all nodes y in the same component as r will have received an r-message whose value on arrival is W(r; y|A), from which I(x_r; x_y | x_A) can be computed in constant time. Thus, for |R| = 1, all scores I(x_R; x_y | x_A) for y ∈ S \ A can collectively be computed at each iteration of the greedy algorithm in O(n · d^3).
Now consider |R| > 1. Let R = (r_1, ..., r_{|R|}) be an ordering of the elements of R, and let R_k be the first k elements of R. Then, by the chain rule of mutual information, I(x_R; x_y | x_A) = Σ_{k=1}^{|R|} I(x_{r_k}; x_y | x_{A∪R_{k-1}}), y ∈ S \ A, where each term in the sum is a pairwise (potentially nonlocal) MI evaluation. The implication is that one can run |R| separate instances of GaBP, each using a different conditioning set A ∪ R_{k-1}, to compute "node and edge weights" (W and φ terms) for the r-message passing scheme outlined above. The chain rule suggests one should then sum the unwarped r-scores of these |R| instances to yield the scores I(x_R; x_y | x_A) for y ∈ S \ A. The total cost of a greedy update is then O(|R| · n · d^3).
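For concreteness, the sketch below implements the greedy loop with unit costs and a naive cubic quantifier (conditioning by dropping rows/columns, then the covariance form of MI). It is our stand-in for the message-passing quantifier: functionally equivalent in the scores it produces, but without the O(n · d^3) efficiency.

```python
import numpy as np

def naive_quantify(J, R, y, A):
    """I(x_R; x_y | x_A): condition on A, invert, then use the covariance form."""
    keep = [v for v in range(J.shape[0]) if v not in A]
    Sig = np.linalg.inv(J[np.ix_(keep, keep)])        # conditional covariance
    pos = {v: k for k, v in enumerate(keep)}
    Ri, yi = [pos[r] for r in R], [pos[y]]
    ld = lambda idx: np.linalg.slogdet(Sig[np.ix_(idx, idx)])[1]
    return 0.5 * (ld(Ri) + ld(yi) - ld(Ri + yi))

def greedy_focused(J, R, S, budget):
    A = []
    for _ in range(budget):                           # unit-cost observations
        cand = [y for y in S if y not in A]
        if not cand:
            break
        A.append(max(cand, key=lambda y: naive_quantify(J, R, y, A)))
    return A
```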
One of the benefits of the focused greedy selection algorithm is its amenability to parallelization. All quantities needed to form the W and φ terms are derived from GaBP, which is parallelizable and guaranteed to converge on trees in at most diam(G) iterations [10]. Parallelization reallocates the expense of quantification across networked computational resources, often leading to faster solution times and enabling larger problem instantiations than are otherwise permissible. However, full parallelization, wherein each node i ∈ V is viewed as a separate computing resource, incurs a multiplicative overhead of O(diam(G)) due to each i having to send |N(i)| messages diam(G) times, yielding local communication costs of O(diam(G) · |N(i)| · d^3) and overall complexity of O(diam(G) · |R| · n · d^3). This overhead can be alleviated by instead assigning to every computational resource a connected subgraph of G.
^1 As node i may have additional neighbors that are not on the u-v path, using the notation φ_i(u, v|C) is a convenient way to implicitly specify the enter/exit scalar r.v.s associated with the path. Any unique path subsuming u-v, or any unique path subsumed in u-v for which i is interstitial, will have equivalent φ_i terms.
^2 If i is in the conditioning set, its outgoing message can be set to -∞, so that the nodes it blocks from reaching r see an apparent information score of 0. Alternatively, i could simply choose not to transmit r-messages to its neighbors.
It should also be noted that if the quantification is instead performed using serial BP (which can be conceptualized as choosing an arbitrary root, collecting messages from the leaves up to the root, and disseminating messages back down again), a factor of 2 savings can be achieved for R_2, ..., R_{|R|} by noting that in moving between instances k and k+1, only r_k is added to the conditioning set. Therefore, by reassigning r_k as the root for the BP instance associated with r_{k+1} (i.e., A ∪ R_k as the conditioning set), only the second half of the message passing schedule (disseminating messages from the root to the leaves) is necessary. We subsequently refer to this trick as "caching."
6 Experiments
To benchmark the runtime performance of the algorithm in Section 5, we implemented its serial GaBP variant in Java, with and without the caching trick described above.
We compare our algorithm with greedy selectors that use matrix inversion (with cubic complexity) to compute nonlocal mutual information measures. Let S_feas := {y ∈ S \ A : c(y) ≤ β - c(A)}. At each iteration of the greedy selector, the blocked inversion-based quantifier computes first J'_{R∪S_feas|A} (entailing a block marginalization of nuisances), from which J'_{R|A} and J'_{R|A∪{y}}, ∀y ∈ S_feas, are computed. Then I(x_R; x_y | x_A), ∀y ∈ S_feas, are computed via a variant of (3). The naive inversion-based quantifier computes I(x_R; x_y | x_A), ∀y ∈ S_feas, "from scratch" by using separate Schur complements of J submatrices and not storing intermediate results. The inversion-based quantifiers were implemented in Java using the Colt sparse matrix libraries [17].
[Figure 3 plot: mean runtime in ms (log scale, 10^0 to 10^6) versus network size n (200 to 2000), with curves for BP-Quant-Cache, BP-Quant-NoCache, Inv-Quant-Block, and Inv-Quant-Naive.]
Figure 3: Performance of GaBP-based and inversion-based quantifiers used in greedy selectors. For each n, the mean of the runtimes over 20 random scalar problem instances is displayed. Our BP-Quant algorithm of Section 5 empirically has approximately linear complexity; caching reduces the mean runtime by a factor of approximately 2.
Figure 3 shows the comparative mean runtime performance of each of the quantifiers for scalar
networks of size n, where the mean is taken over the 20 problem instances proposed for each value
of n. Each problem instance consists of a randomly generated, symmetric, positive-definite, treeshaped precision matrix J, along with a randomly labeled S (such that, arbitrarily, |S| = 0.3|V|)
and R (such that |R| = 5), as well as randomly selected budget and heterogeneous costs defined
over S. Note that all selectors return the same greedy selection; we are concerned with how the
decompositions proposed in this paper aid in the computational performance. In the figure, it is
clear that the GaBP-based quantification algorithms of Section 5 vastly outperform both inversionbased methods; for relatively small n, the solution times for the inversion-based methods became
prohibitively long. Conversely, the behavior of the BP-based quantifiers empirically confirms the
asymptotic O(n) complexity of our method for scalar networks.
7 Performance Bounds
Due to the presence of nuisances in the model, even if the subgraph induced by S is completely disconnected, it is not always the case that the nodes in S are conditionally independent when conditioned on only the relevant latent set R. Lack of conditional independence means one cannot guarantee submodularity of the information measure, as per [6]. Our approach will be to augment R such that submodularity is guaranteed and relate the performance bound to this augmented set.
Let R̃ be any subset such that R ⊆ R̃ ⊆ U and such that nodes in S are conditionally independent conditioned on R̃. Then, by Corollary 4 of [6], I(x_{R̃}; x_A) is submodular and nondecreasing on S. Additionally, for the case of unit-cost observations (i.e., c(A) = |A| for all A ⊆ S), a greedily selected subset A^g_β(R̃) of cardinality β satisfies the performance bound

I(R̃; A^g_β(R̃)) ≥ (1 - 1/e) max_{A⊆S:|A|≤β} I(R̃; A)   (9)
               = (1 - 1/e) max_{A⊆S:|A|≤β} [ I(R; A) + I(R̃ \ R; A | R) ]   (10)
               ≥ (1 - 1/e) max_{A⊆S:|A|≤β} I(R; A),   (11)

where (9) is due to [6], (10) to the chain rule of MI, and (11) to the nonnegativity of MI. The following proposition follows immediately from (11).
Proposition 7. For any set R̃ such that R ⊆ R̃ ⊆ U and nodes in S are conditionally independent conditioned on R̃, provided I(R̃; A^g_β(R̃)) > 0, an online-computable performance bound for any Â ⊆ S in the original focused problem with relevant set R and unit-cost observations is

I(R; Â) ≥ [ (1 - 1/e) · I(R; Â) / I(R̃; A^g_β(R̃)) ] · max_{A⊆S:|A|≤β} I(R; A),   (12)

where the bracketed factor is denoted ψ_R(Â, R̃).
Proposition 7 can be used at runtime to determine what percentage ψ_R(Â, R̃) of the optimal objective is guaranteed, for any focused selector, despite the lack of conditional independence of S conditioned on R. In order to compute the bound, a greedy heuristic running on a separate, surrogate problem with R̃ as the relevant set is required. Finding an R̃ ⊇ R providing the tightest bound is an area of future research.
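Computing the guarantee at runtime is a one-liner once the two MI values are in hand; both inputs below are assumed to have been evaluated already (the second by a greedy run on the surrogate problem with R̃).

```python
import math

def psi(mi_R_Ahat, mi_Rtilde_Ag):
    """Guaranteed fraction of the optimal focused objective, per (12)."""
    return (1.0 - 1.0 / math.e) * mi_R_Ahat / mi_Rtilde_Ag
```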
8 Conclusion
In this paper, we have considered the sensor selection problem on multivariate Gaussian distributions
that, in order to preserve a parsimonious representation, contain nuisances. For pairs of nodes connected in the graph by a unique path, there exist decompositions of nonlocal mutual information into
local MI measures that can be computed efficiently from the output of message passing algorithms.
For tree-shaped models, we have presented a greedy selector where the computational expense of
quantification can be distributed across nodes in the network. Despite deficiency in conditional independence of observations, we have derived an online-computable performance bound based on
an augmentation of the relevant set. Future work will consider extensions of the MI decomposition
to graphs with nonunique paths and/or non-Gaussian distributions, as well as extend the analysis of
augmented relevant sets to derive tighter performance bounds.
Acknowledgments
The authors thank John W. Fisher III, Myung Jin Choi, and Matthew Johnson for helpful discussions
during the preparation of this paper. This work was supported by DARPA Mathematics of Sensing,
Exploitation and Execution (MSEE).
References
[1] C. M. Kreucher, A. O. Hero, and K. D. Kastella. An information-based approach to sensor management in large dynamic networks. Proc. IEEE, Special Issue on Modeling, Identification, & Control of Large-Scale Dynamical Systems, 95(5):978-999, May 2007.
[2] H.-L. Choi and J. P. How. Continuous trajectory planning of mobile sensors for informative forecasting. Automatica, 46(8):1266-1275, 2010.
[3] V. Chandrasekaran, N. Srebro, and P. Harsha. Complexity of inference in graphical models. In Proc. Uncertainty in Artificial Intelligence, 2008.
[4] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[5] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, Feb 2001.
[6] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proc. Uncertainty in Artificial Intelligence (UAI), 2005.
[7] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14:489-498, 1978.
[8] J. L. Williams, J. W. Fisher III, and A. S. Willsky. Performance guarantees for information theoretic active inference. In M. Meila and X. Shen, editors, Proc. Eleventh Int. Conf. on Artificial Intelligence and Statistics, pages 616-623, 2007.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 2nd edition, 2006.
[10] Y. Weiss and W. T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13(10):2173-2200, 2001.
[11] D. M. Malioutov, J. K. Johnson, and A. S. Willsky. Walk-sums and belief propagation in Gaussian graphical models. Journal of Machine Learning Research, 7:2031-2064, 2006.
[12] Y. Liu, V. Chandrasekaran, A. Anandkumar, and A. S. Willsky. Feedback message passing for inference in Gaussian graphical models. IEEE Transactions on Signal Processing, 60(8):4135-4150, Aug 2012.
[13] C. Ko, J. Lee, and M. Queyranne. An exact algorithm for maximum entropy sampling. Operations Research, 43:684-691, 1995.
[14] A. Krause and C. Guestrin. Optimal value of information in graphical models. Journal of Artificial Intelligence Research, 35:557-591, 2009.
[15] P. L. Erdős, M. A. Steel, L. A. Székely, and T. J. Warnow. A few logs suffice to build (almost) all trees: Part II. Theoretical Computer Science, 221:77-118, 1999.
[16] M. J. Choi, V. Y. F. Tan, A. Anandkumar, and A. S. Willsky. Learning latent tree graphical models. Journal of Machine Learning Research, 12:1771-1812, May 2011.
[17] CERN - European Organization for Nuclear Research. Colt, 1999.
| 4950 |@word exploitation:1 inversion:7 nd:2 confirms:1 seek:1 decomposition:14 covariance:1 pg:10 incurs:2 reduction:1 liu:1 score:5 selecting:2 daniel:1 denoting:1 loeliger:1 assigning:1 must:2 john:1 subsequent:2 partition:2 informative:3 j1:1 additive:2 update:1 v:2 greedy:15 selected:3 leaf:3 half:1 parameterization:1 intelligence:4 provides:1 node:38 mathematical:1 along:2 differential:1 consists:1 overhead:2 eleventh:1 pairwise:10 behavior:1 planning:1 freeman:1 decomposed:2 cache:1 considering:1 subvectors:4 becomes:1 cardinality:1 provided:1 moreover:1 notation:1 suffice:1 what:1 benchmarked:1 msee:1 ag:3 transformation:1 finding:1 guarantee:2 every:2 collecting:1 subclass:1 runtime:5 exactly:3 prohibitively:1 wrong:1 control:2 j24:1 unit:3 positive:3 frey:1 local:8 xv:2 despite:2 joining:1 analyzing:1 path:19 approximately:2 specifying:1 conversely:2 suggests:1 hmms:2 factorization:1 range:1 unique:13 acknowledgment:1 union:1 block:5 definite:2 x3:2 xr:15 j0:3 area:1 submatrices:1 thought:1 java:2 inferential:3 alleviated:1 convenient:1 argmaxa:2 cannot:2 marginalize:1 selection:15 convenience:1 context:1 equivalent:1 demonstrated:1 maximizing:3 send:1 conceptualized:1 williams:1 starting:1 focused:21 shen:1 gmrf:8 immediately:1 examines:1 parameterizing:1 rule:7 nuclear:1 j12:8 limiting:1 transmit:1 tan:1 exact:1 programming:1 trick:2 element:5 expensive:1 particularly:1 labeled:2 levine:1 connected:6 ordering:1 substantial:1 complexity:6 nats:1 nonadjacent:1 dynamic:1 terminating:1 entailing:1 serve:1 purely:1 efficiency:1 exit:2 completely:1 joint:4 darpa:1 represented:3 distinct:3 artificial:4 choosing:1 whose:1 encoded:1 posed:1 supplementary:1 solve:1 larger:1 apparent:1 otherwise:2 heuristic:1 statistic:1 nondecreasing:1 itself:1 online:6 obviously:1 sequence:1 jij:2 product:1 macro:1 relevant:10 networked:1 realization:1 subgraph:5 iff:2 poorly:1 convergence:1 empty:1 r1:1 comparative:2 leave:1 derive:2 ij:3 sole:2 received:1 aug:1 auxiliary:1 entirety:1 pxa:1 implemented:2 h01:4 submodularity:3 amenability:1 treeshaped:2 subsequently:3 modifying:1 j34:4 j14:1 material:1 adjacency:1 require:1 generalization:1 preliminary:1 proposition:6 tighter:1 extension:1 hold:2 considered:1 exp:2 matthew:1 jx:1 consecutive:1 proc:4 intermediary:1 applicable:2 correctness:1 mit:5 sensor:9 gaussian:21 always:1 modified:1 reaching:1 avoid:1 caching:3 mobile:1 corollary:1 derived:2 contrast:1 greedily:1 helpful:1 inference:15 mrfs:3 hidden:1 koller:1 transformed:2 overall:2 issue:2 among:1 colt:2 augment:1 development:1 smoothing:1 special:1 mutual:7 marginal:4 field:4 equal:1 comprise:2 shaped:3 once:1 sampling:1 runtimes:2 x4:1 having:1 saving:2 reassigning:1 gabp:14 thin:6 future:2 others:1 np:1 j23:7 micro:1 oblivious:1 few:1 randomly:3 preserve:1 ve:1 parsimoniously:1 argmax:1 nd3:2 friedman:1 subsumed:1 organization:1 interest:2 message:18 evaluation:1 yielding:2 light:1 xb:9 chain:10 amenable:1 implication:1 xrk:1 edge:14 necessary:1 xy:8 tree:17 walk:2 theoretical:1 instance:7 modeling:1 cover:1 cost:10 vertex:10 subset:15 johnson:2 gd:1 retain:1 probabilistic:2 lee:1 na:1 connectivity:1 augmentation:4 again:1 satisfied:1 management:1 vastly:1 choose:1 possibly:1 summable:1 conf:1 warped:4 leading:2 return:1 jii:1 potential:3 singleton:1 subsumes:1 includes:1 coefficient:1 int:1 later:1 h1:3 multiplicative:1 performed:1 root:4 accuracy:1 became:1 efficiently:3 likewise:1 yield:1 cern:1 trajectory:1 malioutov:1 converged:1 parallelizable:1 ed:3 definition:4 evaluates:1 energy:1 
acquisition:1 associated:4 mi:26 proof:1 knowledge:1 schedule:1 back:1 appears:1 permitted:1 coinciding:1 maximally:1 wherein:1 specify:1 wei:1 erd:1 mar:1 furthermore:1 xa:27 correlation:1 receives:1 o:1 propagation:6 lack:2 ekely:1 requiring:1 true:1 contain:2 symmetric:3 nonzero:1 conditionally:3 adjacent:4 during:1 nuisance:13 uniquely:1 whereby:1 noted:1 m:1 complete:1 demonstrate:1 theoretic:1 novel:1 nonmyopic:1 empirically:2 conditioning:6 attached:1 extend:2 relating:1 refer:1 blocked:1 enter:3 rd:1 meila:1 outlined:1 px1:2 mathematics:1 inclusion:1 submodular:2 moving:1 feb:1 multivariate:5 own:1 optimizing:1 certain:2 arbitrarily:3 guestrin:2 greater:1 additional:1 converge:1 determine:1 signal:1 ii:1 relates:1 full:1 reduces:2 faster:1 long:2 serial:2 mrf:2 variant:2 ko:1 heterogeneous:1 subsuming:1 metric:1 iteration:4 achieved:1 krause:2 permissible:1 parallelization:3 operate:1 induced:4 j11:3 schur:4 anandkumar:2 near:1 noting:1 presence:1 intermediate:1 iii:2 concerned:1 marginalization:3 independence:5 xj:5 topology:1 quant:5 reduce:3 parameterizes:1 computable:4 det:8 motivated:1 j22:3 effort:1 forecasting:1 queyranne:1 passing:6 proceed:1 remark:2 generally:2 clear:1 extensively:1 induces:1 diameter:1 outperform:1 exist:2 percentage:1 extrinsic:1 disjoint:4 per:1 discrete:1 independency:2 four:2 d3:11 ht:1 graph:20 merely:1 sum:5 run:1 uncertainty:4 almost:1 chandrasekaran:2 parsimonious:1 decision:3 submatrix:3 bound:12 guaranteed:3 deficiency:1 bp:6 x2:11 n3:1 surro:1 argument:1 separable:1 px:3 relatively:1 according:1 disconnected:1 dissemination:1 across:3 describes:1 partitioned:1 lid:2 quantifier:6 indexing:2 interstitial:4 taken:1 computationally:1 ln:3 resource:3 previously:1 describing:1 nonempty:1 needed:1 hero:1 operation:2 gaussians:1 jhow:1 permit:3 apply:1 tightest:1 harsha:1 distinguished:1 gate:1 original:1 thomas:1 running:2 graphical:16 xc:4 exploit:1 build:1 implied:1 objective:1 added:1 quantity:6 realized:1 primary:1 pave:1 said:1 exhibit:1 subnetwork:1 nemhauser:1 distance:2 separate:6 thank:1 valuation:3 considers:1 reason:1 willsky:4 kalman:1 index:4 relationship:1 providing:2 equivalently:1 potentially:3 statement:2 expense:4 relate:1 stated:1 steel:1 policy:6 unknown:1 perform:1 upper:1 observation:10 markov:9 benchmark:1 finite:1 enabling:1 nonunique:1 jin:1 displayed:1 situation:1 unwarped:1 communication:2 rn:2 arbitrary:2 community:2 inv:2 inferred:1 complement:4 pair:2 subvector:1 specified:1 required:1 established:1 address:1 beyond:1 dynamical:1 pattern:2 sparsity:3 max:4 belief:6 quantification:6 indicator:1 scheme:1 imply:1 library:1 specializes:1 naive:1 gmrfs:2 asymptotic:2 embedded:1 loss:3 wolsey:1 filtering:1 srebro:1 h2:2 integrate:2 myung:1 principle:1 editor:1 storing:1 repeat:1 supported:1 formal:1 understand:1 neighbor:6 sparse:1 distributed:7 benefit:2 feedback:2 dimension:3 valid:1 computes:2 author:3 coincide:1 transaction:2 nonlocal:14 selector:10 observable:3 implicitly:2 sz:1 global:2 active:9 instantiation:1 uai:1 fixated:1 automatica:1 xi:5 alternatively:2 continuous:2 latent:10 reviewed:1 additionally:2 kschischang:1 forest:2 european:1 arise:1 arrival:1 edition:1 convey:1 x1:6 xu:6 augmented:2 cubic:1 definiteness:1 aid:1 wiley:1 precision:8 inferring:1 nonnegativity:2 warnow:1 down:2 rk:7 choi:3 xt:1 sensing:2 r2:1 exists:2 importance:1 execution:1 conditioned:6 budget:2 illustrates:1 locality:1 entropy:3 depicted:1 simply:1 desire:1 expressed:1 scalar:18 collectively:1 satisfies:1 conditional:10 diam:5 viewed:1 
absence:1 fisher:3 hard:1 included:1 specifically:1 determined:1 called:1 total:3 experimental:1 internal:1 latter:2 jonathan:1 preparation:1 outgoing:1 scratch:1 |
4,366 | 4,951 | Σ-Optimality for Active Learning on Gaussian
Random Fields
Yifei Ma
Machine Learning Department
Carnegie Mellon University
[email protected]
Roman Garnett
Computer Science Department
University of Bonn
[email protected]
Jeff Schneider
Robotics Institute
Carnegie Mellon University
[email protected]
Abstract
A common classifier for unlabeled nodes on undirected graphs uses label propagation from the labeled nodes, equivalent to the harmonic predictor on Gaussian random fields (GRFs). For active learning on GRFs, the commonly used V-optimality criterion queries nodes that reduce the L2 (regression) loss. V-optimality satisfies a submodularity property showing that greedy reduction produces a (1 − 1/e) globally optimal solution. However, L2 loss may not characterise the true nature of 0/1 loss in classification problems and thus may not be the best choice for active learning.
We consider a new criterion we call Σ-optimality, which queries the node that minimizes the sum of the elements in the predictive covariance. Σ-optimality directly optimizes the risk of the surveying problem, which is to determine the proportion of nodes belonging to one class. In this paper we extend submodularity guarantees from V-optimality to Σ-optimality using properties specific to GRFs. We further show that GRFs satisfy the suppressor-free condition in addition to the conditional independence inherited from Markov random fields. We test Σ-optimality on real-world graphs with both synthetic and real data and show that it outperforms V-optimality and other related methods on classification.
1 Introduction
Real-world data are often presented as a graph where the nodes in the graph bear labels that vary
smoothly along edges. For example, for scientific publications, the content of one paper is highly
correlated with the content of papers that it references or is referenced by, the field of interest of a
scholar is highly correlated with other scholars s/he coauthors with, etc. Many of these networks
can be described using an undirected graph with nonnegative edge weights set to be the strengths of
the connections between nodes.
The model for label prediction in this paper is the harmonic function on the Gaussian random field
(GRF) by Zhu et al. (2003). It can generalize two popular and intuitive algorithms: label propagation
(Zhu & Ghahramani, 2002), and random walk with absorptions (Wu et al., 2012). GRFs can be
seen as a Gaussian process (GP) (Rasmussen & Williams, 2006) with its (maybe improper) prior
covariance matrix whose (pseudo)inverse is set to be the graph Laplacian.
Like other learning problems, labels may be insufficient and expensive to gather, especially if one
wants to discover a new phenomenon on a graph. Active learning addresses these issues by making
automated decisions on which nodes to query for labels from experts or the crowd. Some popular
criteria are empirical risk minimization (Settles, 2010; Zhu et al., 2003), mutual information gain (Krause et al., 2008), and V-optimality (Ji & Han, 2012). Here we consider an alternative criterion, Σ-optimality, and establish several related theoretical results. Namely, we show that greedy reduction of Σ-optimality provides a (1 − 1/e) approximation bound to the global optimum. We also show that Gaussian random fields satisfy the suppressor-free condition, described below. Finally, we show that Σ-optimality outperforms other approaches for active learning with GRFs for classification.
1.1 V-optimality on Gaussian Random Fields
Ji & Han (2012) proposed greedy variance minimization as a cheap and high profile surrogate active
classification criterion. To decide which node to query next, the active learning algorithm finds the
unlabeled node which leads to the smallest average predictive variance on all other unlabeled nodes.
It corresponds to standard V-optimality in optimal experiment design.
We will discuss several aspects of V-optimality on GRFs below: 1. The motivation behind V-optimality can be paraphrased as expected risk minimization with the L2-surrogate loss (Section 2.1). 2. The greedy solution to the set optimization problem in V-optimality is comparable to the global solution up to a constant (Theorem 1). 3. The greedy application of V-optimality can also be interpreted as a heuristic which selects nodes that have high correlation to nodes with high variances (Observation 4).
Some previous work is related to point 2 above. Nemhauser et al. (1978) show that any submodular, monotone and normalized set function yields a (1 − 1/e) global optimality guarantee for greedy solutions. Our proof technique coincides with Friedland & Gaubert (2011) in principle, but we are not restricted to spectral functions. Krause et al. (2008) showed a counterexample where the V-optimality objective function with GP models does not satisfy submodularity.
1.2 Σ-optimality on Gaussian Random Fields
We define Σ-optimality on GRFs to be another variance minimization criterion that minimizes the sum of all entries in the predictive covariance matrix. As we will show in Lemma 7, the predictive covariance matrix is nonnegative entry-wise and thus the definition is proper. Σ-optimality was originally proposed by Garnett et al. (2012) in the context of active surveying, which is to determine the proportion of nodes belonging to one class. However, we focus on its performance as a criterion in active classification heuristics. The survey-risk of Σ-optimality replaces the L2-risk of V-optimality as an alternative surrogate risk for the 0/1-risk.
We also prove that the greedy application of Σ-optimality has a similar theoretical bound as V-optimality. We will show that greedily minimizing Σ-optimality empirically outperforms greedily minimizing V-optimality on classification problems. The exact reason explaining the superiority of Σ-optimality as a surrogate loss in the GRF model is still an open question, but we observe that Σ-optimality tends to select cluster centers whereas V-optimality goes after outliers (Section 3.1).
Finally, greedy application of both Σ-optimality and V-optimality needs O(N) time per query candidate evaluation after a one-time inverse of an N × N matrix.
1.3 GRFs Are Suppressor-Free
In linear regression, an explanatory variable is called a suppressor if adding it as a new variable
enhances correlations between the old variables and the dependent variable (Walker, 2003; Das &
Kempe, 2008). Suppressors are persistent in real-world data. We show GRFs to be suppressor-free. Intuitively, this means that with more labels acquired, the conditional correlation between
unlabeled nodes decreases even when their Markov blanket has not formed. That GRFs present
natural examples for the otherwise obscure suppressor-free condition is interesting.
2 Learning Model & Active Learning Objectives
We use Gaussian random field/label propagation (GRF / LP) as our learning model. Suppose the
dataset can be represented in the form of a connected undirected graph G = (V, E) where each
node has an (either known or unknown) label and each edge eij has a fixed nonnegative weight
wij (= wji ) that reflects the proximity, similarity, etc. between
nodes vi and vj. Define the graph Laplacian of G to be L = diag(W·1) − W, i.e., l_ii = Σ_j w_ij and l_ij = −w_ij when i ≠ j. Let L_δ = L + δI be the generalized Laplacian obtained by adding self-loops. In the following, we will write L to also encompass βL_δ for the set of hyper-parameters β > 0 and δ ≥ 0.
The binary GRF is a Bayesian model to generate y_i ∈ {0, +1} for every node v_i according to
    p(y) ∝ exp{ −(β/2) ( Σ_{i,j} w_ij (y_i − y_j)^2 + δ Σ_i y_i^2 ) } = exp( −(β/2) y^T L y ).   (2.1)
Suppose nodes ℓ = {v_{ℓ1}, . . . , v_{ℓ|ℓ|}} are labeled as y_ℓ = (y_{ℓ1}, . . . , y_{ℓ|ℓ|})^T; a GRF infers the output distribution on the unlabeled nodes, y_u = (y_{u1}, . . . , y_{u|u|})^T, by the conditional distribution given y_ℓ, as
    Pr(y_u | y_ℓ) ∼ N(ŷ_u, L_u^{-1}) = N(ŷ_u, L_{(v∖ℓ)}^{-1}),   (2.2)
where ŷ_u = (−L_u^{-1} L_{uℓ} y_ℓ) is the vector of predictive means on unlabeled nodes and L_u is the principal submatrix consisting of the unlabeled row and column indices in L, that is, the lower-right block of L = [ L_ℓ  L_{ℓu} ; L_{uℓ}  L_u ]. By convention, L_{(v∖ℓ)}^{-1} means the inverse of the principal submatrix. We use L_{(v∖ℓ)} and L_u interchangeably because ℓ and u partition the set of all nodes v.
Finally, GRF, or GRF / LP, is a relaxation of the binary GRF to continuous outputs, because the latter is
computationally intractable even for a-priori generations. LP stands for label propagation, because
the predictive mean on a node is the probability of a random walk leaving that node hitting a positive
label before hitting a zero label. For multi-class problems, Zhu et al. (2003) proposed the harmonic
predictor which looks at predictive means in one-versus-all comparisons.
Remark: An alternative approximation to the binary GRF is the GRF-sigmoid model, which draws
the binary outputs from Bernoulli distributions with means set to be the sigmoid function of the GRF
(latent) variables. However, this alternative is very slow to compute and may not be compatible with
the theoretical results in this paper.
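To make the predictor in (2.2) concrete, here is a minimal sketch in Python/NumPy. It is our own illustration, not the authors' code; the toy graph, function names, and parameter values are made-up assumptions.

```python
import numpy as np

def grf_predict(W, labeled, y_labeled, beta=1.0, delta=0.0):
    """GRF/LP prediction (Eq. 2.2): mean and covariance on unlabeled nodes."""
    n = W.shape[0]
    # Generalized Laplacian L = beta * (diag(W 1) - W + delta I).
    L = beta * (np.diag(W.sum(axis=1)) - W + delta * np.eye(n))
    u = np.setdiff1d(np.arange(n), labeled)        # unlabeled indices
    cov = np.linalg.inv(L[np.ix_(u, u)])           # predictive covariance L_u^{-1}
    y_hat = -cov @ L[np.ix_(u, labeled)] @ np.asarray(y_labeled, float)
    return u, y_hat, cov                           # y_hat = -L_u^{-1} L_{u,l} y_l

# Toy 4-node chain with the two endpoints labeled 0 and 1.
W = np.array([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
u, y_hat, cov = grf_predict(W, labeled=[0, 3], y_labeled=[0.0, 1.0])
print(u, np.round(y_hat, 3))  # interior nodes interpolate: ~[0.333, 0.667]
```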
2.1 Active Learning Objective 1: L2 Risk Minimization (V-Optimality)
Since in GRFs, regression responses are taken directly as probability predictions, it is computationally and analytically more convenient to apply the regression loss directly in the GRF as in Ji & Han
(2012). Assume the L2 loss to be our classification loss. The risk function, whose input variable is
the labeled subset ℓ, is:
    R_V(ℓ) = E_{y_ℓ}[ E_{y_u}[ Σ_{u_i∈u} (y_{u_i} − ŷ_{u_i})^2 | y_ℓ ] ] = tr(L_u^{-1}).   (2.3)
This risk is written with a subscript V because minimizing (2.3) is also the V-optimality criterion,
which minimizes mean prediction variance in active learning.
In active learning, we strive to select a subset ℓ of nodes to query for labels, constrained by a given budget C, such that the risk is minimized. Formally,
    argmin_{ℓ: |ℓ|≤C} R(ℓ) = R_V(ℓ) = tr(L_{(v∖ℓ)}^{-1}).   (2.4)
2.2 Active Learning Objective 2: Survey Risk Minimization (Σ-Optimality)
Another objective building on the GRF model (2.2) is to determine the proportion of nodes belonging to class 1, as would happen when performing a survey. For active surveying, the risk would be:
    R_Σ(ℓ) = E_{y_ℓ}[ E_{y_u}[ ( Σ_{u_i∈u} y_{u_i} − Σ_{u_i∈u} ŷ_{u_i} )^2 | y_ℓ ] ] = E[ E[ (1^T y_u − 1^T ŷ_u)^2 | y_ℓ ] ] = 1^T L_u^{-1} 1,   (2.5)
which could substitute the risk R(ℓ) in (2.4) and yield another heuristic for selecting nodes in batch active learning. We will refer to this modified optimization objective as the Σ-optimality heuristic:
    argmin_{ℓ: |ℓ|≤C} R(ℓ) = R_Σ(ℓ) = 1^T L_{(v∖ℓ)}^{-1} 1.   (2.6)
Further, we will also consider the application of Σ-optimality in active classification because (2.6) is another metric of the predictive variance. Surprisingly, although both (2.3) and (2.5) are approximations of the real objective (the 0/1 risk), greedy reduction of the Σ-optimality criterion outperforms greedy reduction of the V-optimality criterion in active classification (Sections 3.1 and 5.1), as well as several other methods including expected error reduction.
2.3 Greedy Sequential Application of V/Σ-Optimality
Both (2.4) and (2.6) are subset optimization problems. Calculating the global optimum may be intractable. As will be shown later in the theoretical results, the reductions of both risks are submodular set functions, and the greedy sequential update algorithm yields a solution that has a guaranteed approximation ratio to the optimum (Theorem 1).
At the k-th query decision, denote the covariance matrix conditioned on the previous (k − 1) queries as C = (L_{(v∖ℓ^(k−1))})^{-1}. By Schur's Lemma (or the GP-regression update rule), the one-step lookahead covariance matrix conditioned on ℓ^(k−1) ∪ {v}, denoted as C′ = (L_{(v∖(ℓ^(k−1)∪{v}))})^{-1}, has the following update formula:
    [ C′  0 ; 0  0 ] = C − (1/C_vv) C_{:v} C_{v:},   (2.7)
where without loss of generality v was positioned as the last node. Further denoting C_ij = ρ_ij σ_i σ_j, we can put (2.7) inside R_Σ(·) and R_V(·) to get the following equivalent criteria:
    V-optimality:  v*^(k) = argmax_{v∈u} Σ_{t∈u} (C_vt)^2 / C_vv = argmax_{v∈u} Σ_{t∈u} ρ_vt^2 σ_t^2,   (2.8)
    Σ-optimality:  v*^(k) = argmax_{v∈u} ( Σ_{t∈u} C_vt )^2 / C_vv = argmax_{v∈u} ( Σ_{t∈u} ρ_vt σ_t )^2.   (2.9)
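As an illustration of how (2.7)–(2.9) turn into an algorithm, the following is a hedged sketch (our own, with made-up names) of a single greedy step; C is assumed to hold the current conditional covariance (L_{(v∖ℓ)})^{-1}.

```python
import numpy as np

def greedy_step(C, criterion="sigma"):
    """One greedy query choice plus the covariance downdate.

    criterion="v":     score(v) = sum_t C_vt^2 / C_vv      (Eq. 2.8)
    criterion="sigma": score(v) = (sum_t C_vt)^2 / C_vv    (Eq. 2.9)
    """
    d = np.diag(C)
    scores = (C ** 2).sum(1) / d if criterion == "v" else C.sum(1) ** 2 / d
    v = int(np.argmax(scores))
    C_next = C - np.outer(C[:, v], C[v, :]) / C[v, v]  # rank-one update, Eq. (2.7)
    keep = np.delete(np.arange(len(C)), v)
    return v, C_next[np.ix_(keep, keep)]               # drop the queried row/column
```

Scoring each candidate is O(|u|) here, consistent with the O(N) per-candidate cost noted in Section 1.2.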
3 Theoretical Results & Insights
For the general GP model, greedy optimization of the L2 risk has no guarantee that the solution can be comparable to the brute-force global optimum (taking exponential time to compute), because the objective function, the trace of the predictive covariance matrix, fails to satisfy submodularity in all cases (Krause et al., 2008). However, in the special case of GPs with kernel matrix equal to the inverse of a graph Laplacian (with ℓ ≠ ∅ or δ > 0), the GRF does provide such theoretical guarantees, both for V-optimality and Σ-optimality. The latter is a novel result.
The following theoretical results concern greedy maximization of the risk reduction function (which is shown to be submodular): R̄(ℓ) = R(∅) − R(ℓ), for either R(·) = R_V(·) or R_Σ(·).
Theorem 1 (Near-optimal guarantee for greedy applications of V/Σ-optimality). In risk reduction,
    R̄(ℓ_g) ≥ (1 − 1/e) · R̄(ℓ*),   (3.1)
where R̄(ℓ) = R(∅) − R(ℓ) for either R(·) = R_V(·) or R_Σ(·), e is Euler's number, ℓ_g is the greedy optimizer, and ℓ* is the true global optimizer under the constraint |ℓ*| ≤ |ℓ_g|.
According to Nemhauser et al. (1978), it suffices to show the following properties of R̄(ℓ):
Lemma 2 (Normalization, Monotonicity, and Submodularity). ∀ℓ1 ⊆ ℓ2 ⊆ v, ∀v ∈ v,
    R̄(∅) = 0,   (3.2)
    R̄(ℓ2) ≥ R̄(ℓ1),   (3.3)
    R̄(ℓ1 ∪ {v}) − R̄(ℓ1) ≥ R̄(ℓ2 ∪ {v}) − R̄(ℓ2).   (3.4)
Another sufficient condition for Theorem 1, which is itself an interesting observation, is the suppressor-free condition. Walker (2003) describes a suppressor as a variable, knowing which will suddenly create a strong correlation between the predictors. An example is y_i + y_j = y_k: knowing any one of these will create correlations between the others. Walker further states that suppressors are common in regression problems. Das & Kempe (2008) extend the suppressor-free condition to sets and showed that this condition is sufficient to prove (2.3). Formally, the condition is:
    corr(y_i, y_j | ℓ1 ∪ ℓ2) ≤ corr(y_i, y_j | ℓ1),   ∀v_i, v_j ∈ v, ∀ℓ1, ℓ2 ⊆ v.   (3.5)
It may be easier to understand (3.5) as a decreasing correlation property. It is well known for Markov random fields that the labels of two nodes on a graph become independent given the labels of their Markov blanket. Here we establish that the GRF boasts more than that: the correlation between any two nodes decreases as more nodes get labeled, even before a Markov blanket is formed. Formally:
Theorem 3 (Suppressor-Free Condition). (3.5) holds for pairs of nodes in the GRF model. Note that since the conditional covariance of the GRF model is L_{(v∖ℓ)}^{-1}, we can properly define the corresponding conditional correlation to be
    corr(y_u | ℓ) = D^{-1/2} L_{(v∖ℓ)}^{-1} D^{-1/2},   with D = diag(L_{(v∖ℓ)}^{-1}).   (3.6)
3.1 Insights From Comparing the Greedy Applications of the Σ/V-Optimality Criteria
Both the V/Σ-optimality criteria are approximations to the 0/1 risk minimization objective. Unfortunately, we cannot theoretically reason why greedy Σ-optimality outperforms V-optimality in the experiments. However, we made two observations during our investigation that provide some insights. An illustrative toy example is also provided in Section 5.1.
Observation 4. Eqs. (2.8) and (2.9) suggest that both the greedy Σ/V-optimality criteria select nodes that (1) have high variance and (2) are highly correlated to high-variance nodes, conditioned on the labeled nodes. Notice Lemma 7 proves that predictive correlations are always nonnegative.
In order to contrast Σ/V-optimality, rewrite (2.9) as:
    (Σ-optimality):  argmax_{v∈u} ( Σ_{t∈u} ρ_vt σ_t )^2 = Σ_{t∈u} ρ_vt^2 σ_t^2 + Σ_{t1≠t2∈u} ρ_{v,t1} ρ_{v,t2} σ_{t1} σ_{t2}.   (3.7)
Observation 5. Σ-optimality has one more term, which involves cross products of (ρ_{v,t1} σ_{t1}) and (ρ_{v,t2} σ_{t2}) (which are nonnegative according to Lemma 9). By the Cauchy–Schwarz inequality, the sum of these cross products is maximized when they are equal. So, Σ-optimality additionally favors nodes that (3) have consistent global influence, i.e., that are more likely to be in cluster centers.
4 Proof Sketches
Our results predicate on, and extend to, GPs whose inverse covariance matrix meets Proposition 6.
Proposition 6. L satisfies the following:
    p6.1 (L has proper signs): l_ij ≥ 0 if i = j, and l_ij ≤ 0 if i ≠ j.
    p6.2 (L is undirected and connected): l_ij = l_ji ∀i, j, and Σ_{j≠i} (−l_ij) > 0.
    p6.3 (node degree no less than number of edges): l_ii ≥ Σ_{j≠i} (−l_ij) = Σ_{j≠i} (−l_ji) > 0, ∀i.
    p6.4 (L is nonsingular and positive-definite¹): ∃i : l_ii > Σ_{j≠i} (−l_ij) = Σ_{j≠i} (−l_ji) > 0.
Although the properties of V-optimality fall into the more general class of spectral functions (Friedland & Gaubert, 2011), we have seen no proof of either the suppressor-free condition or the submodularity of Σ-optimality on GRFs. We give the ideas behind the proofs; details are in the appendix.²
Lemma 7. For any L satisfying (p6.1–4), L^{-1} ≥ 0 entry-wise.³
Proof. Sketch: Suppose L = D − W = D(I − D^{-1}W), with D = diag(L). Then we can show the convergence of the Taylor expansion (Appendix A.1):
    L^{-1} = [ I + Σ_{r=1}^{∞} (D^{-1}W)^r ] D^{-1}.   (4.1)
It suffices to observe that every term on the right-hand side (RHS) is nonnegative.
Corollary 8. The GRF prediction operator −L_u^{-1}L_{uℓ} maps y_ℓ ∈ [0, 1]^{|ℓ|} to ŷ_u = −L_u^{-1}L_{uℓ} y_ℓ ∈ [0, 1]^{|u|}. When L is singular, the mapping is onto.
1. Property p6.4 holds after the first query is done or when the regularizer δ > 0 in (2.1).
2. Available at http://www.autonlab.org/autonweb/21763.html
3. In the following, for any vector or matrix A, A ≥ 0 always stands for A being (entry-wise) nonnegative.
Proof. For y_ℓ = 1, (L_u, L_{uℓ})·1 ≥ 0 and L_u^{-1} ≥ 0 imply 1 + L_u^{-1}L_{uℓ}·1 ≥ 0, i.e., 1 ≥ −L_u^{-1}L_{uℓ}·1 = ŷ_u. As both L_u^{-1} ≥ 0 and −L_{uℓ} ≥ 0, we have y_ℓ ≥ 0 ⇒ ŷ_u ≥ 0 and y_ℓ ≥ y′_ℓ ⇒ ŷ_u ≥ ŷ′_u.
Lemma 9. Suppose L = [ L_11  L_12 ; L_21  L_22 ]. Then L^{-1} − [ L_11^{-1}  0 ; 0  0 ] ≥ 0 and is positive-semidefinite.
Proof. As L^{-1} ≥ 0 and is PSD, the RHS below is term-wise nonnegative and the middle term is PSD (Appendix A.2):
    L^{-1} − [ L_11^{-1}  0 ; 0  0 ] = [ L_11^{-1}(−L_12) ; I ] · (L_22 − L_21 L_11^{-1} L_12)^{-1} · [ (−L_21)L_11^{-1},  I ].
As a corollary, the monotonicity in (3.3) for both R(·) = R_V(·) and R_Σ(·) can be shown.
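As a quick numerical sanity check of the expansion (4.1) and the entry-wise nonnegativity of Lemma 7 (our own illustration, not part of the paper's proofs):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5))
W = (A + A.T) / 2
np.fill_diagonal(W, 0)                            # symmetric nonnegative weights
L = np.diag(W.sum(axis=1)) - W + 1.0 * np.eye(5)  # delta = 1: strictly diagonally dominant

D = np.diag(np.diag(L))
M = np.linalg.inv(D) @ (D - L)                    # M = D^{-1} W, spectral radius < 1
series = sum(np.linalg.matrix_power(M, r) for r in range(120)) @ np.linalg.inv(D)
assert np.allclose(series, np.linalg.inv(L))      # Eq. (4.1) converges to L^{-1}
assert (np.linalg.inv(L) >= -1e-12).all()         # Lemma 7: L^{-1} >= 0 entry-wise
```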
Both proofs, for submodularity in (3.4) and for Theorem 3, result from more careful execution of matrix inversions similar to Lemma 9 (detailed in Appendix A.4). We sketch Theorem 3 as an example.
Proof. Without loss of generality, let u = v ∖ ℓ = {1, . . . , k}. By Schur's Lemma (Appendix A.3):
    L_{(v∖ℓ)} := [ A_u  b_u ; b_u^T  c_u ]  ⇒  Cov(y_i, y_k | ℓ) / Var(y_k | ℓ) = (L_{(v∖ℓ)}^{-1})_{ik} / (L_{(v∖ℓ)}^{-1})_{kk} = (A_u^{-1}(−b_u))_i,  ∀i ≠ k,   (4.2)
where the LHS is a reparametrization with c_u being a scalar. Lemma 9 shows that u1 ⊇ u2 ⇒ A_{u1}^{-1} ≥ A_{u2}^{-1} at corresponding entries. Also notice that −b_{u1} ≥ −b_{u2} at corresponding entries, and so the RHS of (4.2) is larger with u1. It suffices to draw a similar inequality in the other direction, Cov(y_k, y_i | ℓ) / Var(y_i | ℓ).
5 A Toy Example and Some Simulations
5.1 Comparing V-Optimality and Σ-Optimality: Active Node Classification on a Graph
To visualize the intuitions described in Section 3.1, Figure 1 shows the first few nodes selected by different optimality criteria. This graph is constructed by a breadth-first search from a random node in a larger DBLP coauthorship network graph that we will introduce in the next section. On this toy graph, both criteria pick the same center node to query first. However, for the second and third queries, V-optimality weighs the uncertainty of the candidate node more, choosing outliers, whereas Σ-optimality favors nodes with universal influence over the graph and goes to cluster centers.
Figure 1: Toy graph demonstrating the behavior of Σ-optimality vs. V-optimality (legend: class 1, class 2, class 3; Σ-optimality; V-optimality).
5.2 Simulating Labels on a Graph
To further investigate the behavior of Σ- and V-optimality, we conducted experiments on synthetic labels generated on real-world network graphs. The node labels were first simulated using the model, in order to compare the active learning criteria directly without raising questions of model fit. We carry out tests on the same graphs with real data in the next section.
the GRF / LP model for predictions. The parameters in the generation phase were ? = 0.01 and
? = 0.05, which maximizes the average classification accuracy increases from 50 random training
nodes to 200 random training nodes using the GRF / LP model for predictions. Figure 2 shows the
binary classification accuracy versus the number of queries on both the DBLP coauthorship graph
6
0.68
0.6
classification accuracy
classification accuracy
0.66
0.64
0.62
0.6
0.58
0.56
??optimality
V?optimality
Random
0.54
0.52
0.5
0
50
100
150
0.58
0.56
0.54
0.5
0
200
number of queries
(a) DBLP coauthorship. 68.3% LOO accuracy.
??optimality
V?optimality
Random
0.52
50
100
150
200
number of queries
(b) CORA citation. 60.5% LOO accuracy.
Figure 2: Simulating binary labels by the GRF-Sigmoid; learning with the GRF / LP, 480 repetitions.
and the CORA citation graph that we will describe below. The best possible classification results are
indicated by the leave-one-out (LOO) accuracies given under each plot.
Figure 2 can be a surprise due to the reasoning behind the L2 surrogate loss, especially when the
predictive means are trapped between [?1, 1], but we see here that our reasoning in Sections (3.1
and 5.1) can lead to the greedy survey loss actually making a better active learning objective.
We have also performed experiments with different values of ? and ?. Despite the fact that larger
? and ? increase label independence on the graph structure and undermine the effectiveness of both
V/?-optimality heuristics, we have seen that whenever the V-optimality establishes a superiority
over random selections, ?-optimality yields better performance.
6 Real-World Experiments
The active learning heuristics to be compared are:⁴
1. The new Σ-optimality with greedy sequential updates: min_{v′} 1^T (L_{u_k∖{v′}})^{-1} 1.
2. Greedy V-optimality (Ji & Han, 2012): min_{v′} tr((L_{u_k∖{v′}})^{-1}).
3. Mutual information gain (MIG) (Krause et al., 2008): max_{v′} (L_{u_k}^{-1})_{v′,v′} / ((L_{(ℓ_k∪{v′})})^{-1})_{v′,v′}.
4. Uncertainty sampling (US), picking the largest prediction margin: max_{v′} ŷ^{(1)}_{v′} − ŷ^{(2)}_{v′}.
5. Expected error reduction (EER) (Settles, 2010; Zhu et al., 2003). Selected nodes maximize the average prediction confidence in expectation: max_{v′} E_{y_{v′}}[ Σ_{u_i∈u} ŷ^{(1)}_{u_i} | y_{v′}, y_{ℓ_k} ].
6. Random selection with 12 repetitions.
Comparisons are made on three real-world network graphs.
1. DBLP coauthorship network.5 The nodes represent scholars and the weighted edges are the
number of papers bearing both scholars' names. The largest connected component has 1711
nodes and 2898 edges. The node labels were hand assigned in Ji & Han (2012) to one of the
four expertise areas of the scholars: machine learning, data mining, information retrieval, and
databases. Each class has around 400 nodes.
2. Cora citation network.6 This is a citation graph of 2708 publications, each of which is classified
into one of seven classes: case based, genetic algorithms, neural networks, probabilistic methods,
reinforcement learning, rule learning, and theory. The network has 5429 links. We took its
largest connected component, with 2485 nodes and 5069 undirected and unweighted edges.
4 Code available at http://www.autonlab.org/autonweb/21763
5 http://www.informatik.uni-trier.de/~ley/db/
6 http://www.cs.umd.edu/projects/linqs/projects/lbc/index.html
Figure 3: Classification accuracy vs. the number of queries. β = 1, δ = 0. Randomized first query. (a) DBLP, 84% LOO accuracy. (b) CORA, 86% LOO accuracy. (c) CITESEER, 76% LOO accuracy. (Legend: Σ-opt, V-opt, Rand, MIG, Unc, EER.)
3. CiteSeer citation network.6 This is another citation graph of 3312 publications, each of which is classified into one of six classes: agents, artificial intelligence, databases, information retrieval, machine learning, human computer interaction. The network has 4732 links. We took its largest connected component, with 2109 nodes and 3665 undirected and unweighted edges.
On all three datasets, Σ-optimality outperforms the other methods by a large margin, especially during the first five to ten queries. The runner-up, EER, catches up to Σ-optimality in some cases, but EER does not have theoretical guarantees.
The win of Σ-optimality over V-optimality has been intuitively explained in Section 5.1 as Σ-optimality having better exploration ability and robustness against outliers. The node choices by both criteria were also visually inspected after embedding the graph into 2-dimensional space using the OpenOrd method developed by Martin et al. (2011). The analysis there was similar to Figure 1.
We also performed real-world experiments on the root-mean-square error of the class proportion estimations, which is the survey risk that Σ-optimality minimizes. Σ-optimality beats V-optimality. Details are omitted for space concerns.
7 Conclusion
For active learning on GRFs, it is common to use variance minimization criteria with greedy one-step lookahead heuristics. V-optimality and Σ-optimality are two criteria based on statistics of the predictive covariance matrix. Both are also risk minimization criteria: V-optimality minimizes the L2 risk (2.3), whereas Σ-optimality minimizes the survey risk (2.5).
Active learning with both criteria can be seen as a subset optimization problem (2.4), (2.6). Both objective functions are supermodular set functions. Therefore, risk reduction is submodular and the greedy one-step lookahead heuristics can achieve a (1 − 1/e) global optimality ratio. Moreover, we have shown that GRFs serve as a tangible example of the suppressor-free condition.
While V-optimality on GRFs inherits from label propagation (and random walks with absorption) and has good empirical performance, it does not directly minimize the 0/1 classification risk. We found that Σ-optimality performs even better. The intuition is described in Section 5.1.
Future work includes deeper understanding of the direct motivations behind Σ-optimality on the GRF classification model and extending the GRF to continuous spaces.
Acknowledgments
This work is funded in part by NSF grant IIS0911032 and DARPA grant FA87501220324.
References
Das, Abhimanyu and Kempe, David. Algorithms for subset selection in linear regression. In Proceedings of the 40th annual ACM symposium on Theory of computing, pp. 45–54. ACM, 2008.
Friedland, S and Gaubert, S. Submodular spectral functions of principal submatrices of a hermitian matrix, extensions and applications. Linear Algebra and its Applications, 2011.
Garnett, Roman, Krishnamurthy, Yamuna, Xiong, Xuehan, Schneider, Jeff, and Mann, Richard. Bayesian optimal active search and surveying. In ICML, 2012.
Ji, Ming and Han, Jiawei. A variance minimization criterion to active learning on graphs. In AISTATS, 2012.
Krause, Andreas, Singh, Ajit, and Guestrin, Carlos. Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research (JMLR), 9:235–284, February 2008.
Martin, Shawn, Brown, W Michael, Klavans, Richard, and Boyack, Kevin W. OpenOrd: an open-source toolbox for large graph layout. In IS&T/SPIE Electronic Imaging, pp. 786806–786806. International Society for Optics and Photonics, 2011.
Nemhauser, George L, Wolsey, Laurence A, and Fisher, Marshall L. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
Rasmussen, Carl Edward and Williams, Christopher KI. Gaussian processes for machine learning, volume 1. MIT Press, Cambridge, MA, 2006.
Settles, Burr. Active learning literature survey. University of Wisconsin, Madison, 2010.
Walker, David A. Suppressor variable(s) importance within a regression model: an example of salary compression from career services. Journal of College Student Development, 44(1):127–133, 2003.
Wu, Xiao-Ming, Li, Zhenguo, So, Anthony Man-Cho, Wright, John, and Chang, Shih-Fu. Learning with partially absorbing random walks. In Advances in Neural Information Processing Systems 25, pp. 3086–3094, 2012.
Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
Zhu, Xiaojin, Lafferty, John, and Ghahramani, Zoubin. Combining active learning and semi-supervised learning using gaussian fields and harmonic functions. In ICML 2003 workshop on The Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, pp. 58–65, 2003.
| 4951 |@word luk:2 cu:1 middle:1 inversion:1 compression:1 proportion:4 laurence:1 c0:1 open:1 simulation:1 covariance:10 citeseer:2 pick:1 tr:3 carry:1 reduction:10 selecting:1 denoting:1 genetic:1 outperforms:6 comparing:2 written:1 john:2 partition:1 happen:1 cheap:1 plot:1 update:4 maxv:1 v:2 greedy:24 selected:2 intelligence:1 provides:1 node:54 org:2 five:1 mathematical:2 along:1 constructed:1 direct:1 become:1 symposium:1 ik:1 persistent:1 prove:2 yu1:1 burr:1 inside:1 hermitian:1 introduce:1 theoretically:1 acquired:1 expected:3 behavior:2 multi:1 globally:1 decreasing:1 ming:2 provided:1 discover:1 project:2 moreover:1 maximizes:1 interpreted:1 minimizes:6 surveying:4 developed:1 guarantee:6 pseudo:1 every:2 classifier:1 schwartz:1 brute:1 uk:1 ly:1 grant:2 superiority:2 positive:3 before:2 t1:3 referenced:1 service:1 tends:1 despite:1 subscript:1 meet:1 au:1 acknowledgment:1 yj:4 block:1 definite:1 area:1 empirical:3 universal:1 submatrices:1 convenient:1 confidence:1 eer:6 suggest:1 zoubin:2 get:2 cannot:1 unlabeled:9 onto:1 operator:1 selection:3 put:1 risk:26 context:1 influence:2 unc:3 www:4 equivalent:2 map:1 center:4 cvt:2 maximizing:1 williams:2 go:2 layout:1 survey:7 autonlab:2 rule:2 insight:3 embedding:1 tangible:1 krishnamurthy:1 suppose:4 inspected:1 exact:1 programming:1 gps:2 us:1 carl:1 element:1 expensive:1 satisfying:1 labeled:7 database:2 improper:1 connected:5 counter:1 decrease:2 yk:4 intuition:2 lji:3 ui:7 rgarnett:1 singh:1 rewrite:1 algebra:1 predictive:12 serve:1 darpa:1 represented:1 lul:6 describe:1 vt1:2 query:17 artificial:1 hyper:1 choosing:1 crowd:1 kevin:1 whose:3 heuristic:8 larger:3 otherwise:1 favor:2 cov:2 ability:1 statistic:1 gp:4 itself:1 au2:1 took:2 interaction:1 product:2 loop:1 combining:1 achieve:1 lookahead:3 grf:25 intuitive:1 description:1 aistat:1 mig:4 convergence:1 cluster:3 optimum:4 extending:1 produce:1 leave:1 ij:2 eq:1 fa87501220324:1 paraphrased:1 strong:1 c:3 involves:1 blanket:3 edward:1 convention:1 minv0:2 direction:1 submodularity:7 ley:1 exploration:1 human:1 settle:3 mann:1 vt2:2 suffices:3 scholar:5 investigation:1 proposition:2 opt:6 absorption:2 extension:1 hold:2 proximity:1 around:1 wright:1 exp:2 visually:1 mapping:1 visualize:1 vary:1 optimizer:2 smallest:1 omitted:1 continuum:1 estimation:1 label:24 largest:4 repetition:2 create:2 establishes:1 reflects:1 weighted:1 minimization:10 cora:4 mit:1 sensor:1 gaussian:11 always:2 modified:1 publication:3 corollary:2 focus:1 inherits:1 properly:1 bernoulli:1 contrast:1 greedily:2 dependent:1 jiawei:1 abhimanyu:1 explanatory:1 wij:4 selects:2 issue:1 classification:18 arg:5 html:2 denoted:1 priori:1 development:1 constrained:1 special:1 kempe:3 mutual:2 field:11 equal:2 having:1 sampling:1 yu:9 look:1 icml:2 future:1 minimized:1 report:2 t2:5 others:1 roman:2 few:1 richard:2 phase:1 consisting:1 psd:2 interest:1 highly:3 investigate:1 mining:2 evaluation:1 runner:1 photonics:1 semidefinite:1 behind:4 yui:3 maxv0:2 edge:8 fu:1 lh:1 old:1 taylor:1 walk:4 weighs:1 theoretical:8 column:1 marshall:1 maximization:1 subset:5 euler:1 entry:6 predictor:3 predicate:1 conducted:1 loo:6 synthetic:2 cho:1 international:1 randomized:1 bu:3 probabilistic:1 linqs:1 picking:1 michael:1 l22:2 lii:3 expert:1 strive:1 toy:4 li:1 de:2 student:1 satisfy:4 vi:3 later:1 performed:3 root:1 yv:1 carlos:1 inherited:1 formed:2 square:1 accuracy:11 opensource:1 variance:10 maximized:1 yield:4 nonsingular:1 generalize:1 bayesian:2 informatik:1 lu:7 expertise:1 j6:4 classified:2 l21:3 whenever:1 
definition:1 against:1 pp:4 proof:9 spie:1 gain:2 dataset:1 popular:2 infers:1 positioned:1 actually:1 originally:1 supermodular:1 response:1 rand:3 done:1 generality:2 p6:6 correlation:9 sketch:3 hand:2 undermine:1 christopher:1 propagation:6 indicated:1 scientific:1 semisupervised:1 building:1 name:1 normalized:1 true:2 brown:1 cald:1 analytically:1 assigned:1 interchangeably:1 self:1 during:2 illustrative:1 coincides:1 criterion:21 generalized:1 performs:1 reasoning:2 harmonic:4 wise:4 novel:1 common:3 sigmoid:4 absorbing:1 ji:6 empirically:1 volume:1 extend:3 he:1 mellon:3 refer:1 cambridge:1 cv:1 yifei:1 schneide:1 submodular:6 funded:1 han:6 similarity:1 v0:7 etc:2 lbc:1 showed:2 optimizes:1 inequality:2 binary:7 vt:4 yi:9 wji:1 seen:4 guestrin:1 george:1 schneider:2 ey:2 determine:3 rv:6 encompass:1 trier:1 technical:2 cross:2 retrieval:2 laplacian:4 prediction:8 regression:8 cmu:3 metric:1 expectation:1 coauthor:1 kernel:1 normalization:1 represent:1 robotics:1 addition:1 want:1 krause:5 whereas:3 walker:4 leaving:1 singular:1 salary:1 umd:1 undirected:6 db:1 lafferty:1 effectiveness:1 call:1 near:2 automated:1 independence:2 fit:1 reduce:1 idea:1 andreas:1 knowing:2 shawn:1 expression:1 six:1 boast:1 remark:1 detailed:1 characterise:1 maybe:1 ten:1 generate:1 http:4 nsf:1 notice:2 sign:1 trapped:1 per:1 carnegie:3 write:2 four:1 shih:1 demonstrating:1 breadth:1 imaging:1 graph:29 relaxation:1 monotone:1 sum:3 inverse:5 uncertainty:2 decide:1 wu:2 electronic:1 draw:2 decision:2 appendix:5 comparable:2 submatrix:2 bound:2 ki:1 guaranteed:1 replaces:1 scaler:1 nonnegative:8 annual:1 strength:1 placement:1 constraint:1 optic:1 autonweb:2 bonn:2 aspect:1 u1:3 optimality:91 min:2 performing:1 martin:2 department:2 according:3 belonging:3 describes:1 lp:6 making:2 explained:1 outlier:3 restricted:1 intuitively:2 pr:1 taken:1 computationally:2 discus:1 available:2 apply:1 observe:2 spectral:3 simulating:2 xiong:1 alternative:4 batch:1 robustness:1 substitute:1 include:1 madison:1 calculating:1 ghahramani:3 especially:3 establish:2 prof:1 february:1 suddenly:1 society:1 objective:11 question:2 surrogate:5 enhances:1 nemhauser:3 friedland:3 win:1 link:2 simulated:2 seven:1 cauchy:1 l12:3 reason:2 code:1 index:2 insufficient:1 ratio:2 minimizing:4 kk:1 unfortunately:1 cij:1 trace:1 design:1 proper:2 unknown:1 l11:2 observation:5 markov:5 datasets:1 beat:1 ajit:1 david:2 namely:1 pair:1 toolbox:1 connection:1 raising:1 textual:1 address:1 plij:1 below:4 regularizor:1 including:1 max:3 natural:1 force:1 zhu:7 imply:1 catch:1 xiaojin:2 lij:5 prior:1 understanding:1 l2:9 literature:1 wisconsin:1 loss:12 bear:1 interesting:2 generation:2 wolsey:1 versus:2 var:2 degree:1 agent:1 gather:1 sufficient:2 consistent:1 xiao:1 principle:1 obscure:1 row:1 compatible:1 surprisingly:1 last:1 free:9 rasmussen:2 side:1 understand:1 deeper:1 institute:1 explaining:1 fall:1 taking:1 zhenguo:1 world:7 stand:2 unweighted:2 commonly:1 made:2 reinforcement:1 citation:6 uni:2 monotonicity:2 suppressor:14 global:8 active:29 continuous:2 latent:1 search:2 why:1 additionally:1 nature:1 career:1 expansion:1 bearing:1 anthony:1 garnett:3 da:3 vj:2 diag:3 rh:3 motivation:2 profile:1 slow:1 fails:1 exponential:1 candidate:2 jmlr:1 third:1 theorem:7 formula:1 specific:1 gaubert:3 showing:1 concern:2 intractable:2 workshop:1 adding:2 sequential:3 corr:3 importance:1 execution:1 budget:1 conditioned:3 margin:2 dblp:5 easier:1 surprise:1 smoothly:1 eij:1 likely:1 hitting:2 partially:1 cvv:3 u2:1 chang:1 corresponds:1 
satisfies:2 acm:2 ma:2 conditional:5 careful:1 jeff:2 fisher:1 content:2 onestep:1 man:1 lemma:10 principal:3 called:1 coauthorship:4 select:2 formally:3 college:1 latter:2 phenomenon:1 correlated:3 |
4,367 | 4,952 | Bayesian optimization explains human active search
Ali Borji
Department of Computer Science
USC, Los Angeles, 90089
[email protected]
Laurent Itti
Departments of Neuroscience and Computer Science
USC, Los Angeles, 90089
[email protected]
Abstract
Many real-world problems have complicated objective functions. To optimize
such functions, humans utilize sophisticated sequential decision-making strategies. Many optimization algorithms have also been developed for this same purpose, but how do they compare to humans in terms of both performance and behavior? We try to unravel the general underlying algorithm people may be using
while searching for the maximum of an invisible 1D function. Subjects click on
a blank screen and are shown the ordinate of the function at each clicked abscissa
location. Their task is to find the function's maximum in as few clicks as possible.
Subjects win if they get close enough to the maximum location. Analysis over
23 non-maths undergraduates, optimizing 25 functions from different families,
shows that humans outperform 24 well-known optimization algorithms. Bayesian
Optimization based on Gaussian Processes, which exploits all the x values tried
and all the f (x) values obtained so far to pick the next x, predicts human performance and searched locations better. In 6 follow-up controlled experiments
over 76 subjects, covering interpolation, extrapolation, and optimization tasks, we
further confirm that Gaussian Processes provide a general and unified theoretical
account to explain passive and active function learning and search in humans.
1 Introduction
To find the best solution to a complex real-life search problem, e.g., discovering the best drug to treat
a disease, one often has few chances for experimenting, as each trial is lengthy and costly. Thus,
a decision maker, be it human or machine, should employ an intelligent strategy to minimize the
number of trials. This problem has been addressed in several fields under different names, including
active learning [1], Bayesian optimization [2, 3], optimal search [4, 5, 6], optimal experimental
design [7, 8], hyper-parameter optimization, and others. Optimal decision making algorithms show
significant promise in many applications, including human-machine interaction, intelligent tutoring
systems, recommendation systems, sensor placement, robotics control, and many more.
Here, inspired by the optimization literature, we design and conduct a series of experiments to
understand human search and active learning behavior. We compare and contrast humans with
standard optimization algorithms, to learn how well humans perform 1D function optimization and
to discover which algorithm best approaches or explains human search strategies. This contrast hints
toward developing even more sophisticated algorithms and offers important theoretical and practical
implications for our understanding of human learning and cognition.
We aim to decipher how humans choose the next x to be queried when attempting to locate the
maximum of an unknown 1D function. We focus on the following questions: Do humans perform
local search (for instance by randomly choosing a location and following the gradient of the function,
e.g., gradient descent), or do they try to capture the overall structure of the underlying function (e.g.,
polynomial, linear, exponential, smoothness), or some combination of both? Do the sets of sample
x locations queried by humans resemble those of some algorithms more than others? Do humans
follow a Bayesian approach, and if so which selection criterion might they employ? How do humans
balance between exploration and exploitation during optimization? Can Gaussian processes [9] offer
a unifying theory of human function learning and active search?
2 Experiments and Results
We seek to study human search behavior directly on 1D function optimization, for the first time
systematically and explicitly. We are motivated by two main reasons: 1) Optimization has been
intensively studied and today a large variety of optimization algorithms and theoretical analyses exist, 2) 1D search allows us to focus on basic search mechanisms utilized by humans, eliminating
real-world confounds such as context, salient distractors, semantic information, etc. A total of 99
undergraduate students with basic calculus knowledge from our university participated in 7 experiments. They were from the following majors: Neurosciences, Biology, Psychology, Kinesiology,
Business, English, Economics, and Political Sciences (i.e., not Maths or Engineering). Subjects had
normal or corrected-to-normal vision and were compensated by course credits. They were seated
behind a 42" computer monitor at a distance of 130 cm (subtending a field of view of 43° × 25°). The experimental methods were approved by our university's Institutional Review Board (IRB).
2.1 Experiment 1: Optimization
Participants were 23 students (6 m, 17 f) aged 18 to 22 (19.52 ± 1.27 yr).
Stimuli were a variety of 25 1D functions with different characteristics (linear/non-linear, differentiable/non-differentiable, etc.), including: Polynomial, Exponential, Gaussian, Dirac, Sinc, etc. The goal was to cover many cases and to investigate the generalization power of algorithms and humans.
Figure 1: Left: a sample search trial (training screen, showing e.g. "Total Reward: 11", "Tries (penalty) = 9", the x-tolerance bar, and a selection panel). The unknown function (blue curve) was only displayed at the end of training trials. During search for the function's maximum, a red dot at (x, f(x)) was drawn for each x selected by participants. Right: a sample function and the pdf of human clicks (original function, histogram of clicks, histogram of first clicks).
To generate a polynomial stimulus of degree m (m > 2), we randomly generated m + 1 pairs
of (x, y) points and fitted a polynomial to them using least squares regression. Coefficients were
saved for later use. Other functions were defined with pre-specified formulas and parameters (e.g.,
Schwefel, Psi). We generated two sets of stimuli, one for training and the other for testing. The x
range was fixed to [−300, 300] and the y range varied depending on the function. Fig. 1 shows a
sample search trial during training, as well as smoothed distribution of clicked locations for first
clicks, and progressively for up to 15 clicks. In the majority of cases, the distribution of clicks starts
with a strong leftward bias for the first clicks, then progressively focusing around the true function
maximum as subjects make more clicks and approach the maximum. Subjects clicked less on
smooth regions and more on spiky regions (near local maxima). This indicates that they sometimes
followed the local gradient direction of the function.
Procedure. Subjects were informed about the goal of the experiment. They were asked to "find the maximum value (highest point) of a function in as few clicks as possible". During the experiment,
each subject went through 30 test trials (in random order). Starting from a blank screen, subjects
could click on any abscissa x location, and we would show them the corresponding f (x) ordinate.
Previously clicked points remained on the screen until the end of the trial. Subjects were instructed
to terminate the trial when they thought they had reached the maximum location within a margin
of error (xTolh =6) shown at the bottom of the screen (small blue line in Fig. 1). This design was
intentional to both obtain information about the human satisficing process and to make the comparison fair with algorithms (e.g., as opposed to automatically terminating a trial if humans happened to
click near the maximum). We designed the following procedure to balance speed vs. accuracy. For
each trial, a subject gained A points for "HIT", lost A points for "MISS", and lost 1 point for each
click. Scores of subjects were kept on the record, to compete against other subjects. The subject
with the highest score was rewarded with a prize. We used A = 10 (for 13 subjects) and A = 20
(for 10 subjects); since we did not observe a significant difference across both conditions, here we
collapsed all the data. We highlighted to subjects that they should decide carefully where to click
next, to minimize the number of clicks before hitting the maximum location. They were not allowed
to click outside the function area. Before the experiment, we had a training session in which subjects
completed 5 trials on a different set of functions than those used in the real experiment. The purpose
was for subjects to understand the task and familiarize themselves to the general complexity and
shapes of functions (i.e., developing a prior). We revealed the entire function at the end of each
training trial only (not after test trials). The maximum number of clicks was set to 40. To prohibit
subjects from using the vertical extent of the screen to guess the maximum location, we randomly
elevated or lowered the function plot. We also recorded the time spent on each trial.
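For concreteness, the per-trial payoff just described amounts to the following small function (our own sketch; the names are made up):

```python
def trial_score(hit: bool, n_clicks: int, A: int = 20) -> int:
    """+A for a HIT, -A for a MISS, and -1 per click (speed vs. accuracy)."""
    return (A if hit else -A) - n_clicks

# A hit after 9 clicks with A = 20 scores 20 - 9 = 11, matching the
# "Total Reward: 11, Tries (penalty) = 9" training screen in Figure 1.
assert trial_score(True, 9) == 11
```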
Human Results. On average, over all 25 functions and 23 subjects, subjects attempted 12.8 ± 0.4 tries to reach the target. Average hit rate (i.e., whether subjects found the maximum) over all trials was 0.74 ± 0.04. Across subjects, standard deviations of the number of tries and hit rates were 3.8 and 0.74. Relatively low values here suggest inter-subject consistency in our task.
Each trial lasted about 22 ± 4 seconds. Figure 2 shows example hard and easy stimuli (fn are function numbers, see Supplement). The Dirac function had the most clicks (16.5), lowest hit rate (0.26), and longest time (32.4 ± 18.8 s). The three other most difficult functions, in terms of function calls, were, listed as (function number, number of clicks): {(f2, 15.8), (f8, 15.2), (f12, 15.1)}. The easiest ones were: {(f16, 9.3), (f20, 9.9), (f17, 10)}. For hit rate, the hardest functions were: {(f24, 0.35), (f15, 0.45), (f7, 0.56)}, and the easiest ones: {(f20, 1), (f17, 1), (f22, 0.95)}. Subjects were faster on Gaussian (f17) and Exponential (f20) functions (16.2 and 16.9 seconds).
Figure 2: Difficult and easy stimuli (hard: f2, f8, f12, f7, f24, f15; easy: f16, f20, f17, f22).
Model-based Results. We compared human data to 24 well-established optimization algorithms. These baseline methods employ various search strategies (local, global, gradient-based, etc.) and often have several parameters (Table 1). Here, we emphasize one to two of the most important parameters for each algorithm. The following algorithms are considered: local search (e.g., Nelder-Mead simplex/FminSearch [10]); multi-start local search; population-based (e.g., Genetic Algorithms (GA) [12, 13], Particle Swarm Optimization (PSO) [11]); DIvided RECTangles (DIRECT) [15]; and Bayesian Optimization (BO) techniques [2, 3]. BO constructs a probabilistic model for f(·) using all previous observations and then exploits this model to make decisions about where along X to evaluate the function next. This results in a procedure that can find the maximum of difficult non-convex, non-differentiable functions with relatively few evaluations, at the cost of performing more computation to determine the next point to try. BO methods are based on Gaussian processes (GP) [9] and several selection criteria. Here we use a GP with a zero-mean prior and an RBF covariance kernel. Two parameters of the kernel function, kernel width and signal variance, are learned from our training functions. We consider 5 types of selection criteria for BO: Maximum Mean (MM) [16], Maximum Variance (MV) [17], Maximum Probability of Improving (MPI) [18, 19], Maximum Expected Improvement (MEI) [20, 21], and Upper Confidence Bounds (UCB) [22]. Further, we consider two BO methods by Osborne et al. [23], with and without gradient information (see supplement). To measure to what degree human search behavior deviates from a random process, we devise a Random search algorithm which chooses the next point uniformly at random from [−300, 300] without replacement. We also run the Gradient Descent (GD) algorithm and its descendants, denoted minFunc-# in Table 1, where # refers to different methods (conjugate gradient (cg), quasi-Newton (qnewton), etc.).
Table 1: Baseline algorithms. We set maxItr to 500 when the only parameter is xTolm. (Type: Loc./L = local; Glob./G = global.)
Algorithm        | Type  | Parameters
FminSearch [10]  | Loc.  | xTolm = 0.005:0.005:0.1
FminBnd          | L     | xTolm = 5e-7:1e-6:1e-5
FminUnc          | L     | xTolm = 0.01:0.05:1
minFunc-#        | L     | xTolm = 0.01:0.05:1
GD               | L     | xTolm = 0.001:0.001:0.005; η = 0.1:0.1:0.5; tol = 1e-6
mult-FminSearch  | Glob. | xTolm = 0.005, starts = 1:10
mult-FminBnd     | G     | xTolm = 5e-7, starts = 1:10
mult-FminUnc     | G     | xTolm = 0.01, starts = 1:10
PSO [11]         | G     | pop = 1:10; gen = 1:20
GA [12, 13]      | G     | pop and gen = 5:10:100; generation gap = 0.01
SA [14]          | G     | stopTemp = 0.01:0.05:1
DIRECT [15]      | G     | ε = 0.1:0.1:1; maxItr = 5:5:70
Random           | G     | maxItr = 5:5:150
GP [2, 3]        | G     | maxItr = 5:5:35
1) an algorithm should have about the same performance, in terms of hit rate and function calls, as
humans (1st-level analysis), and 2) it should have similar search statistics as humans, for example in
terms of searched locations or search order (2nd-level analysis). For fair human-algorithm comparison, we simulate for algorithms the same conditions as in our behavioral experiment, when counting
a trial as a hit or a miss (e.g., using same xTolh ). It is worth noting that in our behavioral experiment
we did our best not to provide information to humans that we cannot provide to algorithms.
In the 1st-level analysis, we tuned algorithms for their best accuracy by performing a grid search
over their parameters to sample the hit-rate vs. function-calls plane. Table 1 shows two stopping
conditions that are considered: 1) we either run an algorithm until a tolerance on x is met (i.e.,
|xi?1 ? xi | < xTolm ), or 2) we allow it to run up to a variable (maximum) number of function calls
(maxItr). For each parameter setting (e.g., a specified population size and generations in GA), since
each run of an algorithm may result in a different answer, we run it 200 times to reach a reliable
estimate of its performance. To generate a starting point for algorithms, we randomly sampled from
3
the distribution of human first clicks (over all subjects and functions, p(x1 ); see Fig. 1). As in
the behavioral experiment, after termination of an algorithm, a hit is declared when: ? xi ? B :
|xi ? argmaxx f (x)| ? xTolh , where set B includes the history of searched locations in a search
trial. Fig. 3 shows search accuracy of optimization algorithms. As shown, humans are better than
all algorithms tested, if hit rate and function calls are weighed equally (i.e., best is to approach the
bottom-right corner of Fig. 3). That is, undergraduates from non-maths majors managed to beat the
state of the art in numerical optimization. BO algorithms with GP-UCB and GP-MEI criteria are
closer to human performance (so are GP-Osborne methods). The DIRECT method did very well
and found the maximum with ? 30 function calls. It can achieve better-than-human hit rate, with
a number of function calls which is smaller than BO algorithms, though still higher than humans
(it was not able to reach human performance with equal number of function calls). As expected,
some algorithms reach human accuracy but with much higher number of function calls (e.g., GA,
mult-start-#), sometimes by up to 3 orders of magnitude.
We chose the following promising algorithms for the 2nd-level analysis: DIRECT, GP-Osborne-G,
GP-Osborne, GP-MPI, GP-MUI, GP-MEI, GP-UCB, GP-UCB-Opt, GP-MV, PSO, and Random.
GP-UCB-Opt is basically the same as GP-UCB with its exploration/exploitation parameter (? in
?x + ??x ; GP mean + GP std) learned from train data for each function. These algorithms were
chosen because their performance curve in the first analysis intersected a window where accuracy is
half of humans and function call is twice as humans (black rectangle in Fig. 3). We first find those
parameters that led these algorithms to their closest performance to humans. We then run them again
and this time save their searched locations for further analysis.
We design 4 evaluation scores to quantify similarities between algorithms and humans on each function: 1) mean sequence distance between an algorithm's searched locations and human searched locations, in each trial, for the first 5 clicks, 2) mean shortest distance between an algorithm's searched locations and all human clicks (i.e., point matching), 3) agreement between the probability distributions of locations searched by all humans and by an algorithm, and 4) agreement between the pdfs of normalized step sizes (normalized to [0, 1] on each trial). Let p_m(t) and p_h(t) be the pdfs of the search statistic t for an algorithm and for humans, respectively. The agreement between p_m and p_h is defined as p_m(argmax_t p_h(t)) (i.e., the value of the algorithm's pdf at the location of the maximum of the human pdf). Median scores (over all 25 functions) are depicted in Fig. 4.

[Figure 3 here: hit rate vs. function calls for humans and all tested algorithms.]
Figure 3: Human vs. algorithm 1D search accuracy.

Distance scores are lower for the Bayesian models compared to the DIRECT, Random, PSO, and GP-Osborne algorithms (Fig. 4.a). Point matching distances are lower for GP-MPI and GP-UCB (Fig. 4.b). These two algorithms also show higher agreement with humans in terms of searched locations (Fig. 4.c). The clearest pattern happens over step size agreement, with BO methods (except GP-MV) being closest to humans (Fig. 4.d). GP-MPI and GP-UCB show higher resemblance to human search behavior over all scores. Further, we measure the regret of algorithms and humans, defined as f_max(·) − f*, where f* is the best value found so far, for up to 15 function calls, averaged over all trials. As shown in Fig. 4.e, BO models approach the maximum of f(·) as fast as humans. Hence, although imperfect, BO algorithms overall are the most similar to humans, out of all algorithms tested.
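Both of these 2nd-level statistics are simple to compute. The sketch below is ours and assumes the two pdfs are histograms normalized over identical bins; the names are hypothetical.

```python
import numpy as np

def agreement(p_algo, p_human):
    # p_m(argmax_t p_h(t)): the algorithm's density at the mode of the human pdf.
    return p_algo[np.argmax(p_human)]

def regret_curve(f_max, ys_searched, n_calls=15):
    # Regret after each function call: f_max minus the best value found so far.
    best = np.maximum.accumulate(np.asarray(ys_searched[:n_calls], dtype=float))
    return f_max - best
```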
Three reasons prompt us to consider BO methods as promising candidates for modeling human
basic search: 1) BO methods perform efficient search in a way that resembles human behavior in
terms of accuracy and search statistics (results of Exp. 1), 2) BO methods exploit GP which offers
a principled and elegant approach for adding structure to Bayesian models (in contrast to purely
data-driven Bayesian). Furthermore, the sequential nature of the BO and updating the GP posterior
after each function call seems a natural strategy humans might be employing, and 3) GP models
explain function learning in humans over simple functional relationships (linear and quadratic) [24].
Function learning and search mechanisms are linked in the sense that, to conduct efficient search,
one needs to know the search landscape and to progressively update one?s knowledge about it.
[Figure 4 here: box plots over the 25 test functions for each algorithm; panels (a) sequence distance, (b) point matching distance, (c) agreement on searched locations, (d) agreement on search step sizes, and (e) regret (f_max − f*) as a function of the number of function calls.]
Figure 4: Results of our second-level analysis. The lower the distance and the higher the agreement, the better (red arrows). Boxes represent the median (red line) and the 25th and 75th percentiles. Panel (e) shows the average regret of algorithms and humans (f* is normalized to f_max for each trial separately).
We thus designed 6 additional controlled experiments to further explore GP as a unified computational principle guiding human function learning and active search. In particular, we investigate the
basic idea that humans might be following a GP, at least in the continuous domain, and change some
of its properties to cope with different tasks. For example, humans may use GP to choose the next
point to dynamically balance exploration vs. exploitation (e.g., in search task), or to estimate the
function value of a point (e.g., in function interpolation). In experiments 2 to 5, subjects performed
interpolation and extrapolation tasks, as well as active versions of these tasks by choosing points to
help them learn about the functions. In experiments 6 and 7, we then return to the optimization task,
for a detailed model-based analysis of human search behavior over functions from the same family.
Note that many real-world problems can be translated into our synthetic tasks here.
We used polynomial functions of degree 2, 3, and 5 as our stimuli (denoted as Deg2, Deg3, and
Deg5, respectively). Two different sets of functions were generated for training and testing, shown in
Fig. 5. For each function type, subjects completed 10 training trials followed by 30 testing trials. As
in Exp. 1, function plots were disclosed to subjects only during training. To keep subjects engaged,
in addition to the competition for a prize, we showed them the magnitude of error during both
training and testing sessions. In experiments 2 to 6, we fitted a GP to different types of functions
using the same set of (x, y) points shown to subjects during training (Fig. 6). A grid search was conducted to learn GP parameters from the training functions to predict subjects' test data.
2.2 Experiments 2 & 3: Interpolation and Active Interpolation
Participants. Twenty subjects (7m, 13f) aged 18 to 22 participated (mean: 19.75 ± 1.06 yr).
Procedure. In the interpolation task, on each function, subjects were shown 4 points x ∈ {−300, a, b, 300} along with their f(x) values. Points a and b were generated randomly once in advance and were then tied to each function. Subjects were asked to guess the function value at the center (x = 0) as accurately as possible. In the active interpolation task, the same 4 points as in interpolation were shown to subjects. Subjects were first asked to choose a 5th point in [−300, 300] to see its y = f(x) value. They were then asked to guess the function value at a randomly-chosen 6th x location as accurately as possible. Subjects were instructed to pick the fifth point that is most informative for estimating the function value at the follow-up random x (see Fig. 6).
Results. Fig. 7.a shows mean distance of human clicks from the GP mean at x = 0 over test trials (averaged over absolute pairwise distances between clicks and the GP) in the interpolation task.
Human errors rise as functions become more complicated. Distances of the GP and the actual function from humans are the same over Deg2 and Deg3 functions (no significant difference in medians,
Wilcoxon signed-rank test, p > 0.05). Interestingly, on Deg5 functions, GP is closer to human
clicks than the actual function (signed-rank test, p = 0.053) implying that GP captures clicks well
in this case. GP did fit the human data even better than the actual function, thereby lending support
to our hypothesis that GP may be a reasonable approximation to human function estimation.
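A minimal sketch of this comparison (ours, not the authors' code): fit a GP to the four disclosed points and read off its posterior mean and std at x = 0. The kernel and hyperparameters are assumptions; the paper learns GP parameters by a grid search over the training functions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_guess(xs_shown, ys_shown, x_query=0.0):
    """GP posterior mean and std at x_query, given the disclosed (x, y) points."""
    kernel = RBF(length_scale=100.0) + WhiteKernel(noise_level=1e-2)  # illustrative
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(np.asarray(xs_shown, dtype=float).reshape(-1, 1), ys_shown)
    mu, std = gp.predict(np.array([[x_query]]), return_std=True)
    return float(mu[0]), float(std[0])

# e.g., the interpolation task discloses x in {-300, a, b, 300}:
# mu0, std0 = gp_guess([-300.0, a, b, 300.0], [f(-300.0), f(a), f(b), f(300.0)])
```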
Could it be that subjects locally fit a line to the two middle points to guess f (0)? To evaluate
this hypothesis, we measured the distance, at x = 0, from human clicks to a line passing through
(a, f(a)) and (b, f(b)). By construction of our stimuli, a line model explains human data well on Deg3 and Deg5, but fails dramatically on Deg2, which deflects around the center. GP is significantly better than the line model on Deg2 (p < 0.0005), while being as good on Deg3 and Deg5 (p = 0.78).
[Figure 5 here: six panels (Deg2-Train, Deg2-Test, Deg3-Train, Deg3-Test, Deg5-Train, Deg5-Test), each plotting y over x ∈ [−300, 300].]
Figure 5: Train and test polynomial stimuli used in experiments 2 through 6.
[Figure 6 here: panels illustrating the Interpolation, Active Interpolation, Extrapolation, Active Extrapolation, and Optimization 2/3 tasks, showing the GP mean and std, the actual functions, human clicks, and the acquisition criteria (GP-MV, GP-UCB, GP-MPI, GP-MEI).]
Figure 6: Illustration of experiments. In extrapolation, polynomials (degrees 1, 2 & 3) fail to explain our data.
Another possibility could be that humans choose a point randomly on the y axis, thus discarding the
shape of the function. To control for this, we devised two random selection strategies. The first one
uniformly chooses y values between 0 and 100. The second one, known as shuffled random (SRand),
takes samples from the distribution of y values selected by other subjects over all functions. The
purpose is to account for possible systematic biases in human selections. We then calculate the
average of the pairwise distances between human clicks and 200 draws from each random model.
Both random models fail to predict human answers on all types of functions (significantly worse
than GP, signed-rank test ps < 0.05). One advantage of the GP over other models is providing a
level of uncertainty at every x. Fig. 7.a (inset) demonstrates similar uncertainty patterns for humans
and the GP, showing that both uncertainties (at x = 0) rise as functions become more complicated.
Interpolation results suggest that humans try to capture the shape of functions. If this is correct,
we expect that humans will tend to click on high uncertainty regions (according to GP std) in the
active interpolation task (see Fig. 6 for an example). Fig. 7.b shows the average of GP standard
deviation at locations of human selections. Humans did not always choose x locations with the
highest uncertainty (shown in red in Fig. 7.b). One reason for this might be that several regions had
about the same std. Another possibility is that subjects had a slight preference to click toward the center. However, the GP std at human-selected locations was significantly higher than the GP std at random and SRand points, over all types of functions (signed-rank test, ps < 1e−4; non-significant on Deg2 vs. SRand, p = 0.18). This result suggests that since humans did not know in advance
where a follow-up query might happen, they chose high-uncertainty locations according to GP, as
clicking at those locations would most shrink the overall uncertainty about the function.
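Under this reading, the active-interpolation choice is plain uncertainty sampling. A sketch (ours), assuming a GP already fitted to the disclosed points as in the earlier snippet:

```python
import numpy as np

def most_uncertain_x(gp, lo=-300.0, hi=300.0, n_grid=601):
    # Pick the location with maximal GP posterior std: labeling it shrinks
    # the overall uncertainty about the function the most.
    grid = np.linspace(lo, hi, n_grid).reshape(-1, 1)
    _, std = gp.predict(grid, return_std=True)
    return float(grid[np.argmax(std), 0])
```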
2.3 Experiments 4 & 5: Extrapolation and Active Extrapolation
Participants. 16 new subjects (7m, 9f) completed experiments 4 and 5 (age: 19.62 ± 1.45 yr).
Procedure. Three points x ∈ {−300, c, 100} and their y values were shown to subjects. Point c was random, specific to each function. In the extrapolation task, subjects were asked to guess the function value at x = 200 as accurately as possible (Fig. 6). In the active extrapolation task, subjects were asked to choose the most informative 4th point in [−300, 100] for estimating f(200).
[Figure 7 here: (a) Interpolation and (c) Extrapolation show the mean distance of human selections from each model (actual function, GP, line, uniform random, shuffled random) over Deg2/Deg3/Deg5, with insets showing the std of humans and of the GP; (b) Active Interpolation and (d) Active Extrapolation show the mean GP standard deviation at chosen locations for humans, random, shuffled random, and the max/min-std baselines.]
Figure 7: a) Mean distance of human clicks from models. Error bars show the standard error of the mean (s.e.m.) over test trials. The inset shows the standard deviation of humans and the GP model at x = 0. b) Mean GP std at human vs. random clicks in active interpolation. c and d correspond to a and b, for the extrapolation task.

[Figure 8 here: (a) distance of human clicks and (b) distance of random clicks from the locations of max GP mean and max GP std; (c) mean selected function values (left) and GP values (right); (d) mean selected GP std values, each over Deg2/Deg3/Deg5.]
Figure 8: Results of the optimization task 2. a and b) Distance of human and random clicks from the locations of max Q (i.e., max GP mean and max GP std). c) Actual function and GP mean values at human and random clicks. d) Normalized GP standard deviation at human vs. random clicks. Error bars show s.e.m. over test trials.

Results. A similar analysis as in the interpolation task is conducted. As seen in Fig. 7.c, in alignment with the interpolation results, humans are good at Deg2 and Deg3 but fail on Deg5, and so does the GP model. Here again, with Deg5, GP and humans are closer to each other than to the actual function, further suggesting that their behaviors and errors are similar. There is no significant difference between GP and the actual functions over all three function types (signed-rank test; p > 0.25). Interestingly, a line model fitted to points c and 100 is impaired significantly (p < 1e−5 vs. GP) over all function types (Fig. 6). Both random strategies also performed significantly worse than GP on this task (signed-rank test; ps < 1e−6). SRand performs better than uniform random, indicating
the existence of systematic biases in human clicks. Subjects learned that f(200) did not fall at extreme lows or highs (the same argument holds for f(0) in interpolation). As in the interpolation
task, GP and human standard deviations rise as functions become more complex (Fig. 7.c; inset).
Active extrapolation (Fig. 7.d), similar to active interpolation, shows that humans tended to choose
locations with significantly higher uncertainty than uniform and SRand points, for all function types
(ps < 0.005). Some subjects in this task tended to click toward the right (close to 100), maybe
to obtain a better idea of the curvature between 100 and 200. This is perhaps why the ratio of
human std to max std is lower in active extrapolation compared to active interpolation (0.75 vs.
0.82), suggesting that maybe humans used an even more sophisticated strategy on this task.
2.4 Experiment 6: Optimization 2
Participants were another 21 subjects (4m, 17f) in the age range of 18 to 22 (mean: 20 ± 1.18).
Procedure. Subjects were shown function values at x ∈ {−300, −200, 200, 300} and were asked
to find the x location where they think the function?s maximum is. They were allowed to make two
equally important clicks and were shown the function value after each one. For quadratic functions,
we only used 13 concave-down functions that have one unique maximum.
Results. We perform two analyses shown in Fig. 8. In the first one, we measure the mean distance
of human clicks (1st and 2nd clicks) from the location of the maximum GP mean and maximum GP
standard deviation (Fig. 8.a). We updated the GP after the first click.
We hypothesized that the human first click would be at a location of high GP variance (to reduce
uncertainty about the function), while the second click would be close to the location of highest GP
mean (estimated function maximum). However, results showed that human 1st clicks were close to
the max GP mean and not very close to the max GP std. Human 2nd clicks were even closer (signed-rank test, p < 0.001) to the max GP mean and further away from the max GP std (p < 0.001).
These two observations together suggest that humans might have been following a Gaussian process
with a selection criterion heavily biased towards finding the maximum, as opposed to shrinking the
most uncertain region. Repeating this analysis for random clicks (uniform and SRand) shows quite
the opposite trend (Fig. 8.b). Random locations are further apart from maximum of the GP mean
(compared to human clicks) while being closer to the maximum of the GP std point (compared
to human clicks). This cross pattern between human and random clicks (wrt. GP mean and GP
std) shows a systematic search strategy utilized by humans. Distances of human clicks from the
max GP mean and max GP std rise as functions become more complicated. In the second analysis
(Fig. 8.c), we measure actual function and GP values at human and random clicks. Humans had
significantly higher function values at their 2nd clicks; p < 1e−4 (the same held for GP values; p < 0.05).
Values at random points are significantly lower than human clicks. Humans were less accurate as
functions became more complex, as indicated by lower function values. Finally, Fig. 8.d shows that
humans chose points with significantly less std (normalized to the entire function) in their 2nd clicks
compared to random and first clicks. Human 1st clicks have higher std than uniform random clicks.
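The first analysis can be sketched as follows (our illustration; the GP setup and function names are assumptions). The GP is refit after each click, mirroring the update described above:

```python
import numpy as np

def click_distances(gp, xs_shown, ys_shown, clicks, f, lo=-300.0, hi=300.0):
    """For each human click: distance to the current argmax of the GP mean and
    to the argmax of the GP std; the click is then revealed before the next fit."""
    grid = np.linspace(lo, hi, 601).reshape(-1, 1)
    X, y = list(xs_shown), list(ys_shown)
    dists = []
    for cx in clicks:
        gp.fit(np.asarray(X, dtype=float).reshape(-1, 1), y)
        mu, sd = gp.predict(grid, return_std=True)
        dists.append((abs(cx - grid[np.argmax(mu), 0]),
                      abs(cx - grid[np.argmax(sd), 0])))
        X.append(cx)
        y.append(f(cx))  # observe f at the click and update the GP next round
    return dists
```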
2.5 Experiment 7: Optimization 3
Participants were 19 new subjects (6m, 13f) in the age range of 19 to 25 (mean: 20.26 ± 1.64 yr).
Stimuli. Functions were sampled from a Gaussian process with predetermined parameters to assure
functions come from the same family and resemble each other (as opposed to Exp. 1; See Fig. 6).
Procedure was the same as in Exp. 1. Number of train and test trials, in order, were 10 and 20.
[Figure 9 here: (a) mean distance of human clicks from the max GP mean and max GP std as a function of the number of clicks; (b) normalized GP mean and std of human clicks; inset: regret of humans, the GP model, random, and shuffled random.]
Figure 9: Exploration vs. exploitation balance in the Optimization 3 task.

Results. Subjects had an average accuracy of 0.76 ± 0.11 (0.5 ± 0.18 on train) over all subjects and functions, and an average of 8.86 ± 1.12 clicks (7.15 ± 1.2 on train) before ending a search trial. To investigate the sequential strategy of subjects, we progressively updated a GP using a subject's clicks on each trial, and exploited this GP to evaluate the next click of the same subject. In other words, we attempted to know to what degree a subject follows a GP. Results are shown in Fig. 9. The regret of the GP model and humans declines with more
clicks, implying that humans chose informative clicks regarding optimization (figure inset). Humans converge to the maximum location
slightly faster than a GP fitted to their data, and much faster than random.
Fig. 9.a shows that subjects get closer to the location of maximum GP mean and further away from
max GP std (for 15 clicks). Fig. 9.b shows the normalized mean and standard deviation of human
clicks (from the GP model), averaged over all trials. At about 6.4 clicks, subjects are at 58% of
the function maximum while they have reduced the variance by 42%. Interestingly, we observe
that humans tended to click on higher uncertainty regions (according to GP) in their first 6 clicks
(average over all subjects and functions), then gradually relying more on the GP mean (i.e., balancing
exploration vs. exploitation). Results of optimization tasks suggest that human clicks during search
for a maximum of a 1D function can be predicted by a Gaussian process model.
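The per-subject analysis above amounts to a progressive refit; a sketch (ours), with the normalization by the grid-wide maxima being our assumption for the normalized curves of Fig. 9.b:

```python
import numpy as np

def gp_stats_at_next_click(gp, clicks, ys, lo=-300.0, hi=300.0):
    """Refit a GP on a subject's first t clicks and record the normalized
    posterior mean and std at click t+1 (cf. Fig. 9.b)."""
    grid = np.linspace(lo, hi, 601).reshape(-1, 1)
    stats = []
    for t in range(1, len(clicks)):
        gp.fit(np.asarray(clicks[:t], dtype=float).reshape(-1, 1), ys[:t])
        mu_grid, sd_grid = gp.predict(grid, return_std=True)
        mu, sd = gp.predict(np.array([[clicks[t]]]), return_std=True)
        stats.append((mu[0] / mu_grid.max(), sd[0] / sd_grid.max()))
    return stats
```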
3 Discussion and Conclusion
Our contributions are twofold: First, we found a striking capability of humans in 1D function optimization. In spite of the relative naivety of our subjects (not maths or engineering majors), the high
human efficiency in our search task does open the challenge that even more efficient optimization
algorithms must be possible. Additional pilot investigations not shown here suggest that humans
may perform even better in optimization when provided with first and second derivatives. Following
this road may lead to designing efficient selection criteria for BO methods (for example new ways to
augment gradient information with BO). However, it remains to be addressed how our findings scale
up to higher dimensions and benchmark optimization problems. Second, we showed that Gaussian
processes provide a reasonable (though not perfect) unifying theoretical account of human function
learning, active learning, and search (GP plus a selection strategy). Results of experiments 2 to 5
lead to an interesting conclusion: In interpolation and extrapolation tasks, subjects try to minimize
the error between their estimation and the actual function, while in active tasks they change their
objective function to explore uncertain regions. In the optimization task, subjects progressively sample the function, update their belief, and use this belief again to find the location of the maximum (i.e., exploring new parts of the search space and exploiting parts that look promising).
Our findings support previous work by Griffiths et al. [24] (also [25, 26, 27]). Yet, while they
showed that Gaussian processes can predict human errors and difficulty in function learning, here
we focused on explaining human active behavior with GP, thus extending explanatory power of GP
one step ahead. One study showed promising evidence that our results may extend to a larger class of
natural tasks. Najemnik and Geisler [6, 28, 29] proposed a Bayesian ideal-observer model to explain
human eye movement strategies during visual search for a small Gabor patch hidden in noise. Their
model computes posterior probabilities and integrates information across fixations optimally. This
process can be formulated with BO with an exploitative search mechanism (i.e., GP-MM). Castro
et al. [30] studied human active learning on the well-understood problem of finding the threshold
in a binary search problem. They showed that humans perform better when they can actively select
samples, and their performance is nearly optimal (below the theoretical upper bound). However,
they did not address how humans choose the next best sample. One aspect of our study which we
did not elaborate much here, is the satisficing mechanisms humans used in our task to decide when
to end a trial. Further modeling of our data may be helpful to develop new stopping criteria for
active learning methods. Related efforts have studied strategies that humans may use to quickly find
an object (i.e., search, active vision) [31, 32, 33, 34, 35], optimal foraging [36], and optimal search
theories [4, 5], which we believe could all now be revisited with GP as an underlying mechanism.
Supported by NSF (CCF-1317433, CMMI-1235539) and ARO (W911NF-11-1-0046, W911NF-12-1-0433).
References
[1] B. Settles, "Active Learning," Morgan & Claypool, 2012.
[2] D. Jones, "A taxonomy of global optimization methods based on response surfaces," Journal of Global Optimization, vol. 21, pp. 345–383, 2001.
[3] E. Brochu, M. Cora, and N. de Freitas, "A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning," Technical Report, 2009.
[4] L. D. Stone, The Theory of Optimal Search, 1975.
[5] B. O. Koopman, "The theory of search. I. Kinematic bases," Operations Research, vol. 4, 1956.
[6] J. Najemnik and W. S. Geisler, "Optimal eye movement strategies in visual search," Nature, 2005.
[7] W. B. Powell and I. O. Ryzhov, Optimal Learning. J. Wiley and Sons, 2012.
[8] V. V. Fedorov, Theory of Optimal Experiments. Academic Press, 1972.
[9] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
[10] J. A. Nelder and R. Mead, "A simplex method for function minimization," Computer Journal, 1965.
[11] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE ICNN, 1995.
[12] J. H. Holland, Adaptation in Natural and Artificial Systems, 1975.
[13] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, 1989.
[14] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, 1983.
[15] D. Finkel, DIRECT Optimization Algorithm User Guide, 2003.
[16] A. Moore, J. Schneider, J. Boyan, and M. S. Lee, "Q2: Memory-based active learning for optimizing noisy continuous functions," in ICML, pp. 386–394, 1998.
[17] D. Lewis and W. Gale, "A sequential algorithm for training text classifiers," in Proc. ACM SIGIR Conference on Research and Development in Information Retrieval, 1994.
[18] H. J. Kushner, "A new method of locating the maximum of an arbitrary multipeak curve in the presence of noise," J. Basic Engineering, vol. 86, pp. 97–106, 1964.
[19] J. Elder, "Global R^d optimization when probes are expensive: the GROPE algorithm," in IEEE International Conference on Systems, Man and Cybernetics, 1992.
[20] J. Mockus, V. Tiesis, and A. Zilinskas, "The application of Bayesian methods for seeking the extremum," in Towards Global Optimization (L. Dixon and E. Szego, eds.), 1978.
[21] M. Locatelli, "Bayesian algorithms for one-dimensional global optimization," Journal of Global Optimization, vol. 10, pp. 57–76, 1997.
[22] D. D. Cox and S. John, "A statistical method for global optimization," in Proc. IEEE Conference on Systems, Man and Cybernetics, 1992.
[23] M. Osborne, R. Garnett, and S. Roberts, "Gaussian processes for global optimization," in LION3, 2009.
[24] T. L. Griffiths, C. Lucas, J. J. Williams, and M. L. Kalish, "Modeling human function learning with Gaussian processes," in NIPS, 2009.
[25] B. R. Gibson, X. Zhu, T. T. Rogers, C. Kalish, and J. Harrison, "Humans learn using manifolds, reluctantly," in NIPS, 2010.
[26] J. D. Carroll, "Functional learning: The learning of continuous functional mappings relating stimulus and response continua," Educational Testing Service, Princeton, NJ, 1963.
[27] K. Koh and D. E. Meyer, "Function learning: Induction of continuous stimulus-response relations," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 17, pp. 811–836, 1991.
[28] J. Najemnik and W. S. Geisler, "Eye movement statistics in humans are consistent with an optimal search strategy," Journal of Vision, vol. 8, no. 3, pp. 1–14, 2008.
[29] W. S. Geisler and R. L. Diehl, "A Bayesian approach to the evolution of perceptual and cognitive systems," Cognitive Science, vol. 27, pp. 379–402, 2003.
[30] R. Castro, C. Kalish, R. Nowak, R. Qian, T. Rogers, and X. Zhu, "Human active learning," in NIPS, 2008.
[31] J. Palmer, P. Verghese, and M. Pavel, "The psychophysics of visual search," Vision Research, vol. 40, 2000.
[32] J. M. Wolfe, "Visual search," in Attention (H. Pashler, ed.), pp. 13–74, Psychology Press, 1998.
[33] K. Nakayama and P. Martini, "Situating visual search," Vision Research, vol. 51, pp. 1526–1537, 2011.
[34] M. P. Eckstein, "Visual search: A retrospective," Journal of Vision, vol. 11, no. 5, 2011.
[35] A. Borji and L. Itti, "State-of-the-art in visual attention modeling," IEEE PAMI, 2012.
[36] A. C. Kamil, J. R. Krebs, and H. R. Pulliam, eds., Foraging Behavior. New York: Plenum, 1987.
4,368 | 4,953 | Latent Structured Active Learning
Wenjie Luo
TTI Chicago
[email protected]
Alexander G. Schwing
ETH Zurich
[email protected]
Raquel Urtasun
TTI Chicago
[email protected]
Abstract
In this paper we present active learning algorithms in the context of structured
prediction problems. To reduce the amount of labeling necessary to learn good
models, our algorithms operate with weakly labeled data and we query additional
examples based on entropies of local marginals, which are a good surrogate for
uncertainty. We demonstrate the effectiveness of our approach in the task of 3D
layout prediction from single images, and show that good models are learned when
labeling only a handful of random variables. In particular, the same performance
as using the full training set can be obtained while only labeling ~10% of the
random variables.
1 Introduction
Most real-world applications are structured, i.e., they are composed of multiple random variables
which are related. For example, in natural language processing, we might be interested in parsing
sentences syntactically. In computer vision, we might want to predict the depth of each pixel, or its
semantic category. In computational biology, given a sequence of proteins (e.g., lethal and edema
factors, protective antigen) we might want to predict the 3D docking of the anthrax toxin. While individual variables could be considered independently, it has been demonstrated that taking relations
into account improves prediction performance significantly.
Prediction in structured models is typically performed by maximizing a scoring function over the
space of all possible outcomes, an NP-hard task for most graphical models. Traditional learning algorithms for structured problems tackle the supervised setting [16, 33, 11], where input-output pairs
are given and each structured output is fully labeled. Obtaining fully labeled examples might, however, be very cumbersome as structured models often involve a large number of random variables,
e.g., in semantic segmentation, we have to label several million random variables, one for each pixel.
Furthermore, obtaining ground truth is sometimes difficult as it potentially requires accessing extra
sensors, e.g., laser scanners in the case of stereo. This is even more extreme in the medical domain,
where obtaining extra labels is sometimes not even possible, e.g., when tests are not available. Thus,
reducing the amount of labeled examples required for learning the scoring function is key for the
success of structured prediction in real-world applications.
The active learning setting is particularly beneficial as it has the potential to considerably reduce
the amount of supervision required to learn a good model, by querying only the most informative
examples. In the structured case, active learning can be generalized to query only subparts of the
graph for each example, reducing the amount of necessary labeling even further.
While a variety of active learning approaches exists for the case of classification and regression, the
structured case has been less popular, perhaps because of its intrinsic computational difficulties as we
have to deal with exponentially sized output spaces. Existing approaches typically consider the case
where exact inference is possible [7], label the full output space [7, 22], or rely on computationally
expensive processes that require inference for each possible outcome of each random variable [34].
The latter is computationally infeasible for most graphical models.
In contrast, in this paper we present efficient approximate approaches for general graphical models
where exact inference is intractable. In particular, we propose to select which parts to label based
on the entropy of the local marginal distributions. Our active learning algorithms exploit recently
developed weakly supervised methods for structured prediction [28], showing that we can benefit
from unlabeled examples and exploit the marginal distributions computed during learning. Furthermore, computation is re-used at each active learning iteration, improving efficiency significantly.
We demonstrate the effectiveness of our approach in the context of 3D room layout estimation from
single images, and show that state-of-the-art results are achieved by employing much fewer manual interactions (i.e., labels). In particular, we match the performance of the state-of-the-art in this
task [27] while only labeling ~10% of the random variables.
In the remainder of the paper we first review learning methods for structured prediction. We then
propose our active learning algorithms, and show our experimental evaluation followed by a discussion on related work and conclusions.
2 Maximum Likelihood Structure Prediction
We begin by reviewing structured prediction approaches that employ both fully labeled training sets
as well as those that handle latent variables. Of particular interest to us are probabilistic formulations
since we employ entropies of local probability distributions as our criteria for deciding which parts
of the graph to label during each active learning step.
Let x ∈ X be the input space (e.g., an image or a sentence), and let s ∈ S be the structured label space that we are interested in predicting (e.g., an image segmentation or a parse tree). We define φ : X × S → R^F to be a mapping from input and label space to an F-dimensional feature space. Here we consider log-linear distributions p_w(s|x) describing the probability over a structured label space S given an object x ∈ X as

    p_w(s|x) ∝ exp( w^T φ(x, s) ).    (1)

During learning, we are interested in estimating the parameters w ∈ R^F of the log-linear distribution such that the score w^T φ(x, s) is high if s ∈ S is a 'good' label for x ∈ X.
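For an enumerable label space, Eq. (1) is a soft-max over the scores. A minimal sketch (ours), where phi_all stacks one feature row per labeling s ∈ S:

```python
import numpy as np

def p_w(w, phi_all):
    """Log-linear distribution of Eq. (1); phi_all has shape (|S|, F)."""
    scores = phi_all @ w
    z = np.exp(scores - scores.max())  # subtract the max for numerical stability
    return z / z.sum()
```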
2.1 Supervised Setting
To define 'good', in the supervised setting we are given a training set D = {(x_i, s_i)}_{i=1}^N containing N pairs, each composed of an input x ∈ X and some fully labeled data s ∈ S. In addition, we are often able to compare the fitness of an estimate ŝ ∈ S for a training sample (x, s) ∈ D via what we refer to as the task-loss function ℓ_{(x,s)}(ŝ). Its purpose is very much like enforcing a distance between the hyperplane defined by the parameters and the respective sample when considering the popular max-margin setting. We incorporate this loss function into the learning process by considering the loss-augmented distribution

    p_{(x,s)}(ŝ|w) ∝ exp( w^T φ(x, ŝ) + ℓ_{(x,s)}(ŝ) ).    (2)

Intuitively, it places more probability mass on those parts of the output space S that have a high loss, forcing the model to adapt to a more difficult setting than the one encountered at inference, where the loss is not present.
Maximum likelihood learning aims at finding model parameters w which assign the highest probability to the training set D. Assuming the data to be independent and identically distributed (i.i.d.), our goal is to minimize the negative log-posterior −ln[ p(w) Π_{(x,s)∈D} p_{(x,s)}(s|w) ], with p(w) ∝ e^{−(C/p)||w||_p^p} being a prior on the model parameters. The cost function is therefore given by

    (C/p)||w||_p^p + Σ_{(x,s)∈D} [ ε ln Σ_{ŝ∈S} exp( (w^T φ(x, ŝ) + ℓ_{(x,s)}(ŝ)) / ε ) − w^T φ(x, s) ],    (3)

where we have included a parameter ε to yield a soft-max function. Although being a convex function, the difficulty arises from the sum over exponentially many label configurations ŝ.

Different algorithms have been proposed to solve this task. While efficient computation over tree-structured models is required for convergence guarantees [16], approximations were suggested to achieve convergence even when working with loopy models [11].
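For intuition, one summand of Eq. (3) can be written down directly when S is small enough to enumerate; the sketch below is ours (real instances instead exploit the feature decomposition discussed in Sec. 2.2):

```python
import numpy as np
from scipy.special import logsumexp

def eq3_term(w, phi_all, loss_all, s_star, eps=1.0):
    """One summand of Eq. (3) for an enumerable S: the eps-smoothed,
    loss-augmented log-partition minus the score of the annotation s_star.
    phi_all: (|S|, F) feature rows; loss_all: (|S|,) task losses."""
    scores = phi_all @ w + loss_all
    return eps * logsumexp(scores / eps) - phi_all[s_star] @ w
```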
2.2 Dealing with Latent Variables
In the weakly supervised setting, we are given a training set D = {(x_i, y_i)}_{i=1}^N containing N pairs, each composed of an input x ∈ X and some partially labeled data y ∈ Y ⊆ S. For every training pair, the label space S = Y × H is divided into two non-intersecting subspaces Y and H. We refer to the missing information h ∈ H as hidden or latent. As before, we incorporate a task-loss function, and define the loss-augmented likelihood of a prediction ŷ ∈ Y when observing the pair (x, y) as

    p_{(x,y)}(ŷ|w) ∝ Σ_{ĥ∈H} p_{(x,y)}(ŷ, ĥ|w) = Σ_{ĥ∈H} p_{(x,y)}(ŝ|w),    (4)

with ŝ = (ŷ, ĥ) and p_{(x,y)}(ŝ|w) defined as in Eq. 2. The minimization of the negative log-posterior results in the difference of two convex terms as follows:

    (C/p)||w||_p^p + Σ_{(x,y)∈D} [ ε ln Σ_{ŝ∈S} exp( (w^T φ(x, ŝ) + ℓ_{(x,y)}(ŝ)) / ε ) − ε ln Σ_{ĥ∈H} exp( (w^T φ(x, y, ĥ) + ℓ^c_{(x,y)}(y, ĥ)) / ε ) ],

with the first two terms being the sum of the log-prior and the logarithm of the partition function. For generality we allow different task-losses ℓ, ℓ^c, while noting that ℓ^c ≡ 0 in our experiments.
Besides the previously outlined difficulty of exponentially sized product spaces, the cost function is
no longer convex. Hence we generally employ expectation maximization (EM) or concave-convex
procedure (CCCP) [37] type of approaches, i.e., we linearize the non-convex part at the current
iterate before taking a step in the direction of the gradient of a convex function. More specifically,
we follow Schwing et al. [28] and upper-bound the concave part via a minimization over a set of
dual variables subsequently referred to as q(x,y) (h):
    (C/p)||w||_p^p + Σ_{(x,y)} [ ε ln Σ_{ŝ∈S} exp( (w^T φ(x, ŝ) + ℓ_{(x,y)}(ŝ)) / ε ) − ε H(q_{(x,y)}) − E_{q_{(x,y)}}[ w^T φ(x, y, ĥ) + ℓ^c(x, y, ĥ) ] ].
To deal with the exponential complexity we notice that frequently the k-th element of the feature vector decomposes into local terms, i.e., φ_k(x, s) = Σ_{i∈V_{k,x}} φ_{k,i}(x, s_i) + Σ_{α∈E_{k,x}} φ_{k,α}(x, s_α). V_{k,x} represents the set indexing the unary potentials for the k-th feature of example (x, y). Similarly, E_{k,x} denotes the set of all high-order variable interaction sets α in the k-th feature of example (x, y). All variable indices which are not observed are subsumed within the set H. Similarly, all factors α that contain variable i are summarized within the set N(i).
We leverage the decomposition within the features to also approximate the entropy over the joint distribution q_{(x,y)}(h) by local ones ranging over marginals. Furthermore, we approximate the marginal polytope by the local polytope. We deal with the summation over the output space objects ŝ ∈ S in the convex part in a similar manner. To this end we change to the dual space, employ the entropy approximations, and transform the resulting surrogate function back to the primal space, where we obtain Lagrange multipliers λ which enforce the marginalization constraints. Altogether we obtain
an approximate primal program having the following form:
    min_{d,λ,w}  f_1(w, d, λ) + f_2(d) + f_3(d)
    s.t.  Σ_{h_α \ h_i} d_{(x,y),α}(h_α) = d_{(x,y),i}(h_i)   ∀(x, y), i ∈ H, α ∈ N(i), h_i ∈ S_i    (5)
          d_{(x,y),i}, d_{(x,y),α} ∈ Δ,

with Δ denoting probability simplices. We refer the reader to [28] for the specific forms of these
functions.
Following EM or CCCP, this program is optimized by alternately minimizing w.r.t. the local beliefs d to solve the latent variable prediction problem, and performing a gradient step w.r.t. the weights as well as block-coordinate descent steps to update the Lagrange multipliers λ. The latter is equivalent
to solving a supervised conditional random field problem given the distribution over latent variables
inferred in the preceding latent variable prediction step.
We augment [28], and return not only the weights but also the local beliefs d which represent the
joint distribution q(x,y) (h), i.e., a distribution over the latent space only. We summarize this process
in Alg. 1. Note that only a local minimum is obtained as we are solving a non-convex problem.
Algorithm 1 Latent structured prediction
Input: data D, initial weights w
repeat
    repeat
        // solve latent variable prediction problem
        min_d f_2 + f_3  s.t.  ∀(x, y): d_{(x,y)} ∈ D_{(x,y)}
    until convergence
    // message passing update
    ∀(x, y), i ∈ S: set λ_{(x,y),i} s.t. ∇_{λ_{(x,y),i}}(f_1 + f_2) = 0
    // gradient step with step size η
    w ← w − η ∇_w(f_1 + f_2)
until convergence
Output: weights w, beliefs d
3 Active Learning
In the previous section, we defined the maximum likelihood estimators for learning in the supervised
and weakly supervised setting. We now derive our active learning approaches. In the active learning
setting, we assume a given training set D_S = {(x_i, y_i)}_{i=1}^{N_L} containing N_L pairs, each composed of an input x ∈ X and some partially labeled data y ∈ Y ⊆ S. As before, for every training pair, we divide the label space S = Y × H into two non-intersecting subspaces Y and H, and refer to the missing information h ∈ H as hidden or latent. Additionally, we are given a set of unlabeled examples D_U = {x_i}_{i=1}^{N_u}.
We are interested in answering the following question: which part of the graph for which example
should we labeled in order to learn the best model with the least amount of supervision? Towards
this goal, we derive iterative algorithms which select the random variables to be labeled based on
the local entropies. This is intuitive, as entropy is a surrogate for uncertainty and useful for the
considered application since the cost of labeling a random variable is independent of the selection.
Here, our algorithms iteratively query the labels of the random variables of highest uncertainty,
update the model parameters w and again ask for the next most uncertain set of variables.
Towards this goal, we need to compute the entropies of the marginal distributions over each latent
variable, as well as the entropy over each random variable of the unlabeled examples. This is in
general NP-hard, as we are interested in dealing with graphical models with general potentials and
connectivity. In this paper we derive two active learning algorithms, each with a different trade-off
between accuracy and computational complexity.
Separate active: Our first algorithm utilizes the labeled and weakly labeled examples to learn
at each iteration. Once the parameters are learned it performs inference over the unlabeled and
partially labeled examples to query for the next random variable to label. Thus, it requires a separate
inference step for each active learning iteration. As shown in our experiments, this can be done
efficiently using convex belief propagation [10, 26]. The corresponding algorithm is summarized in
Alg. 2.
Joint active: Our second active learning algorithm takes advantage of unlabeled examples during
learning and no extra effort is required to compute the most informative random variable. Note
that this contrasts active learning algorithms which typically do not exploit unlabeled data during
learning and require very expensive computations in order to select the next example or random
variable to be labeled. Let D1 = DS ? DU be the set of all training examples containing both
fully labeled, partially labeled and unlabeled examples. At each iteration we obtain Dt by querying
the label of a random variable not being labeled in Dt?1 . Thus, at each iteration, we learn using a
weakly supervised structured prediction task that solves
?
X
X
C
? ln
kwt kpp +
exp
p
t
(x,y)?D
s
??S
wt> ?(x, s?)+`(x,y) (?
s)
4
!
? ln
X
t
?
h?H
exp
!?
c
?
?
wt> ?(x, y, h)+`
(x,y) (y, h)
?,
Algorithm 2 Separate active
Input: data D_S, D_U, initial weights w
repeat
    (w, d_S) ← Alg. 1(D_S, w)
    d_U ← Inference(D_U)
    i* ← arg max_i H(d_i)
    D_S ← D_S ∪ {(x_{i*}, y_{i*})}, D_U ← D_U \ x_{i*}
until sufficiently certain
Output: weights w

Algorithm 3 Joint active
Input: data D_S, D_U, initial weights w
repeat
    (w, d) ← Alg. 1(D_S ∪ D_U, w)
    i* ← arg max_i H(d_i)
    D_S ← D_S ∪ {(x_{i*}, y_{i*})}, D_U ← D_U \ x_{i*}
until sufficiently certain
Output: weights w
with w_t the weights for the t-th iteration. We resort to the approximated problem given in Eq. 5 to solve this optimization task. The entropies are readily computable in closed form, as the local beliefs d are computed during learning. Thus, no extra inference step is necessary. The local entropies are then given by H(d_i) = −Σ_{h_i=1}^{|H_i|} d_i(h_i) log d_i(h_i), and we query the variable that has the highest
entropy, i.e., the highest uncertainty. Note that this computation is linear in the number of unlabeled
random variables and linear in the number of states. We summarize our approach in Alg. 3. Note
that this algorithm is more expensive than the previous one as learning employs the fully, weakly
and unlabeled examples. This is particularly the case when the pool of unlabeled examples is large.
However, as shown in our experimental evaluation, it can dramatically reduce the amount of labeling
required to learn a good model.
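In code, the query rule is a one-liner over the beliefs returned by Alg. 1; a sketch (ours):

```python
import numpy as np

def most_uncertain(beliefs):
    """beliefs: list of local marginals d_i (one array per unlabeled variable);
    returns the index of the variable with maximal entropy H(d_i)."""
    H = [-(d * np.log(d + 1e-12)).sum() for d in beliefs]  # eps guards log(0)
    return int(np.argmax(H))
```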
Batch mode: The two previously defined active learning approaches are computationally expensive as for each sequential active learning step, a new model has to be learned and inference has to
be performed over all latent variables. We also investigate batch algorithms which label k random
variables at each step of the algorithm. Towards this goal, we simply label the top k most uncertain variables. Note that this is an approximation of what the sequential algorithm will do, as the
estimates of the parameters and the entropies are not updated when selecting the i-th variable.
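The batch variant simply takes the k largest entropies at once; a sketch (ours):

```python
import numpy as np

def top_k_uncertain(beliefs, k):
    """Indices of the k most entropic unlabeled variables (batch query)."""
    H = np.array([-(d * np.log(d + 1e-12)).sum() for d in beliefs])
    return np.argsort(-H)[:k]
```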
Re-using computation: Warm starting the learning algorithm after each active learning query is
important in order to reduce the number of iterations required for convergence. Since (almost) the
same samples are involved at each step, we can extract a lot of information from previous iterations.
To this end we re-use both the weights w as well as the messages ? and beliefs. More specifically,
for Alg. 2 we first perform inference on only newly selected examples to update the corresponding
messages ?. Only afterwards and together with Lagrange multipliers from the other training images
and the current weights, we perform the next iteration and another active step. On the other hand,
since we take advantage of all the unlabeled data during the joint active learning algorithm (Alg. 3),
we already know the Lagrange multipliers ? for every image. Without any further updates we
directly start a new active step. In our experimental evaluation we show that this choice results in
dramatic speed ups when compared to randomly initializing the weights and messages during every
active learning iteration. Note that the joint approach (Alg. 3) requires a larger number of iterations
to converge as it employs large amounts of unlabeled data. After a few iterations, convergence
for the following active learning steps improves significantly requiring about as much time as the
separate approach (Alg. 2) does.
4 Experimental Evaluation
We demonstrate the performance of our algorithms on the task of predicting the 3D layout of rooms
from a single image. Existing approaches formulate the task as a structured prediction problem focusing on estimating the 3D box which best describes the layout. Taking advantage of the Manhattan-world assumption (i.e., the existence of three dominant vanishing points corresponding to mutually orthogonal directions),
and given the vanishing points, the problem can be formulated as inference in a pairwise graphical
model composed of four random variables [27]. As shown in Fig. 1, these variables represent the
angles encoding the rays that originate from the respective vanishing points. Following existing
approaches [12, 17], we employ F = 55 features based on geometric context (GC) [13] and orientation maps (OM) [18] as image cues. Our features φ count, for each face in the cuboid (given a particular configuration of the layout), the number of pixels with a certain label for OM and the probability that such a label exists for GC; the task-loss ℓ denotes the pixel-wise prediction error.
[Figure 1 here: the four angles s_1, ..., s_4 parameterizing the layout, and the corresponding factor graph.]
Figure 1: Parameterization and factor graph for the 3D layout prediction task.
[Figure 2 here: four panels, (a) k = 1, (b) k = 4, (c) k = 8, (d) k = 12, each plotting pixelwise test error vs. number of queries for random, separate, joint, and best.]
Figure 2: Test set error as a function of the number of random variables labeled, when using joint vs. separate active learning. The different plots reflect scenarios where the top k random variables are labeled at each iteration (i.e., batch setting). From left to right, k = 1, 4, 8 and 12.
Performance is measured as the percentage of pixels that have been correctly labeled as left-wall,
right-wall, front-wall, ceiling, or floor. Unless otherwise stated, all experiments are performed by
averaging over 20 runs of the algorithm, where the initial seed of 10 fully labeled images is selected
at random.
Active learning: We begin our experimentation by comparing the two proposed active learning
algorithms, i.e., separate (Alg. 2) and joint (Alg. 3). As shown in Fig. 2(a), both active learning
algorithms achieve much lower test error than an algorithm that selects which variables to label
at random. Also, note that the joint algorithm takes advantage of unlabeled data and achieves good
performance after labeling only a few variables, improving significantly over the separate algorithm.
Batch active learning: Fig. 2 shows the performances of both active learning algorithms when
labeling a batch of k random variables before re-learning. Note that even with a batch of k = 12
random variables, our algorithms quickly outperform random selection, as illustrated in Fig. 2(d).
Image vs random variable: Instead of labeling one random variable at a time, we also experiment
with an algorithm that labels the four variables of an image at once. Note that this setting is equivalent to labeling four random variables per image. As shown in Fig. 3(a), labeling full images
requires more annotations to achieve the same test error than labeling random variables from possibly different examples.
Importance of ε: Fig. 3(b) and (c) show the performance of our active learning algorithms as a
function of ε. Note that this parameter is fairly important. In particular, when ε = 1, the entropy
of most random variables is too large to be discriminative. This is illustrated in Fig. 3(d), where we
observe a fairly uniform distribution over the states of a randomly chosen variable for ε = 1. Our
active learning algorithm thus prefers smaller values of ε. We hypothesize that this is due to the fact
that we have a small number of random variables, each having a large number of states. Our initial
tests show that in other applications where the number of states is smaller (e.g., segmentation), larger
values of ε perform better. An automatic selection of ε is the subject of our future research.
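To make the role of ε concrete, the following runnable sketch treats ε as a temperature that sharpens a variable's marginal before computing its entropy; whether this matches the authors' exact use of ε in the objective is our assumption.

import numpy as np

def scaled_entropy(scores, epsilon):
    # scores: unnormalized log-potentials of one variable's states.
    p = np.exp((scores - scores.max()) / epsilon)      # stabilized softmax
    p /= p.sum()
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
scores = rng.standard_normal(50)                       # a variable with 50 states
for eps in (1.0, 0.1, 0.01):
    # smaller eps -> peakier distribution -> lower, more discriminative entropy
    print(eps, scaled_entropy(scores, eps))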
Complexity Separate vs. Joint: In Fig. 4(a) we illustrate the number of CCCP iterations as a
function of the number of queried examples for both active learning algorithms. We observe that
the joint algorithm requires more computation initially. But after the first few active steps, i.e., after
having converged to a good solution, its computation requirements reduce drastically. Here we use
ε = 0.01 for both algorithms.
[Figure 3 panels: (a) pixelwise test error vs. number of queries for labeling one image vs. one variable at a time (random, separate, joint); (b), (c) pixelwise test error vs. number of queries for ε = 1, 0.1, 0.01 (separate and joint); (d) the marginal distribution over the states of one variable.]
Figure 3: Test set error as a function of the number of random variables labeled ((a)-(c)). The marginal
distribution is illustrated in (d) for different ε.
[Figure 4 panels: (a) number of CCCP iterations vs. number of queries for joint ε = 0.01 and separate ε = 0.01; (b), (c) time [s] vs. number of active learning iterations finished, with and without reuse.]
Figure 4: Number of CCCP iterations as a function of the number of queried variables in (a), and
time after a specified number of active iterations in (b) (joint) and (c) (separate).
Reusing computation: Fig. 4(b) and (c) show the number of finished active learning iterations as a
function of time for the joint and separate algorithm respectively. Note that by reusing computation,
a much larger number of active learning iterations finishes when given a specific time budget.
5 Related Work
Active learning approaches consider two different scenarios. In stream-based methods [5], samples
are considered successively and a decision is made to discard or eventually pick the currently investigated sample. In contrast, pool-based methods [20] have access to a large set of unlabeled data.
Clearly our proposed approach has a pool-based flavor. Over the years many different strategies
have been proposed in the context of active learning algorithms to decide which example to label
next. While we follow the uncertainty sampling scheme [20, 19] using an entropy measure, sampling schemes based on expected model change [29] have also been proposed. Other alternatives are
expected error reduction [24], variance reduction [4, 6], least-confident measure [7] or margin-based
measures [25].
An alternative way to classify active learning algorithms is related to the information revealed after
querying for a label. In the multi-armed bandit model [1, 2] the algorithm chooses an action/sample
and observes the utility of only that action. Alternatively when learning with expert advice, utilities
for all possible actions are revealed [3]. Between both of the aforementioned extremes sits the coactive learning setting [30] where a subset of rewards for all possible actions is revealed by the user.
Our approach resembles the multi-armed bandit setting since we only get to know the result of the
newly queried sample.
Active learning approaches have been proposed in the context of Neural Networks [6], Support Vector Machines [32], Gaussian processes [14], CRFs [7] and structured max-margin formulations [22].
Contrasting many of the previously proposed approaches we consider active learning as an extension of a latent structured prediction setting, i.e., we extend the double-loop algorithm by yet another
layer. Importantly, our active learning algorithm follows the recent ideas to unify CRFs and structured SVMs. It employs convex approximations and is amenable to general graphical models with
arbitrary topology and energy functions.
The first application of active learning in computer vision was developed by Kapoor et al. [14] to
perform object recognition with minimal supervision. In the context of structured models, [8] proposed to use conditional entropies to decide which image to label next in a segmentation task. In [36]
the set of frames to label in a video sequence is selected based on the cost of labeling each frame and
the cost of correcting errors. Unlike our approach, [8, 36] labeled full images (not sets of random
variables). As shown in our experiments this requires more manual interactions than our approach.
GrabCut [23] popularized the use of 'active learning' for figure-ground segmentation, where the
question of what to label next is answered by a human via an interactive segmentation system.
Siddiquie et al. [31] propose to label that variable which most reduces the entropy of the entire 'system,' i.e., all the data, by taking into account correlations between variables using a Bethe entropy
approximation. In [15], the next region to be labeled is selected based on a surrogate of uncertainty
(i.e., min marginals) which is computed efficiently via dynamic graph cuts. This, however, is only
suitable for problems that can be solved via graph cuts (e.g., binary labeling problems with submodular energies). In contrast, in this paper we are interested in the general setting of arbitrary energies
and connectivities. Entropy was used as an active learning criteria for tree-structured models [21],
where marginal probabilities can be computed exactly.
In the context of video segmentation, Fathi et al. [9] frame active learning as a semi-supervised
learning problem over a graph. They utilized the entropy as a metric for selecting which superpixel to label within their graph regularization approach. In the context of holistic approaches,
Vijayanarasimhan et al. [35] investigated the problem of which task to label. Towards this goal they
derived a multi-label multiple-instance approach, which takes into account the task effort (i.e., the
expected time to perform each labeling). Vezhnevets et al. [34] resort to the expected change as the
criteria to select which parts to label in the graphical model. Unfortunately, computing this measure is computationally expensive, and their approach is only feasible for graphical models where
inference can be solved via graph cuts.
6 Conclusions
We have proposed active learning algorithms in the context of structured models which utilize local
entropies to decide which subset of the output space of which example to label next. We have
demonstrated the effectiveness of our approach on the problem of 3D room layout prediction from a
single image, and we showed that state-of-the-art performance can be obtained while employing
only ~10% of the labelings. We will release the source code upon acceptance, as well as scripts to
reproduce all experiments in the paper. In the future, we plan to apply our algorithms in the context
of holistic models in order to investigate which tasks are more informative for visual parsing.
References
[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine
Learning, 2002.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The non-stochastic multi-armed bandit problem.
SIAM J. on Computing, 2002.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning and Games. Cambridge University Press, 2006.
[4] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning,
1994.
[5] D. Cohn, L. Atlas, R. Ladner, M. El-Sharkawi, R. Marks II, M. Aggoune, and D. Park. Training connectionist networks with queries and selective sampling. In Proc. NIPS, 1990.
[6] D. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. J. of Artificial
Intelligence Research, 1996.
[7] A. Culotta and A. McCallum. Reducing labeling effort for structured prediction tasks. In Proc. AAAI,
2005.
[8] A. Farhangfar, R. Greiner, and C. Szepesvari. Learning to Segment from a Few Well-Selected Training
Images. In Proc. ICML, 2009.
[9] A. Fathi, M. F. Balcan, X. Ren, and J. M. Rehg. Combining Self Training and Active Learning for Video
Segmentation. In Proc. BMVC, 2011.
[10] T. Hazan and A. Shashua. Norm-Product Belief Propagation: Primal-Dual Message-Passing for LP-Relaxation and Approximate-Inference. Trans. Information Theory, 2010.
[11] T. Hazan and R. Urtasun. A Primal-Dual Message-Passing Algorithm for Approximated Large Scale
Structured Prediction. In Proc. NIPS, 2010.
[12] V. Hedau, D. Hoiem, and D. A. Forsyth. Recovering the Spatial Layout of Cluttered Rooms. In Proc.
ICCV, 2009.
[13] D. Hoiem, A. A. Efros, and M. Hebert. Recovering Surface Layout from an Image. IJCV, 2007.
[14] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Active Learning with Gaussian Processes for Object
Categorization. In Proc. ICCV, 2007.
[15] P. Kohli and P. Torr. Measuring Uncertainty in Graph Cut Solutions -Efficiently Computing Min-marginal
Energies using Dynamic Graph Cuts. In Proc. ECCV, 2006.
[16] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In Proc. ICML, 2001.
[17] D. C. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating Spatial Layout of Rooms using Volumetric
Reasoning about Objects and Surfaces. In Proc. NIPS, 2010.
[18] D. C. Lee, M. Hebert, and T. Kanade. Geometric Reasoning for Single Image Structure Recovery. In
Proc. CVPR, 2009.
[19] D. Lewis and J. Catlett. Heterogeneous uncertainty sampling for supervised learning. In Proc. ICML,
1994.
[20] D. Lewis and W. Gale. A sequential algorithm for training text classifiers. In Proc. Research and Development in Info. Retrieval, 1994.
[21] T. Mensink, J. Verbeek, and G. Csurka. Learning Structured Prediction Models for Interactive Image
Labeling. In Proc. CVPR, 2011.
[22] D. Roth and K. Small. Margin-based Active Learning for Structured Output Spaces. In Proc. ECML,
2006.
[23] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive Foreground Extraction using Iterated
Graph Cuts. In Proc. SIGGRAPH, 2004.
[24] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction.
In Proc. ICML, 2001.
[25] T. Scheffer, C. Decomain, and S. Wrobel. Active hidden Markov models for information extraction. In
Proc. Int'l Conf. Advances in Intelligent Data Analysis, 2001.
[26] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed Message Passing for Large Scale
Graphical Models. In Proc. CVPR, 2011.
[27] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient Structured Prediction for 3D Indoor
Scene Understanding. In Proc. CVPR, 2012.
[28] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient Structured Prediction with Latent
Variables for General Graphical Models. In Proc. ICML, 2012.
[29] B. Settles, M. Craven, and S. Ray. Multiple-instance active learning. In Proc. NIPS, 2008.
[30] P. Shivaswamy and T. Joachims. Online Structured Prediction via Coactive Learning. In Proc. ICML,
2012.
[31] B. Siddiquie and A. Gupta. Beyond Active Noun Tagging: Modeling Contextual Interactions for Multi-Class Active Learning. In Proc. CVPR, 2010.
[32] S. Tong and D. Koller. Support vector machine active learning with applications to text classification.
JMLR, 2001.
[33] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large Margin Methods for Structured and
Interdependent Output Variables. JMLR, 2005.
[34] A. Vezhnevets, V. Ferrari, and J. M. Buhmann. Active Learning for Semantic Segmentation with Expected
Change. In Proc. CVPR, 2012.
[35] S. Vijayanarasimhan and K. Grauman. Cost-Sensitive Active Visual Category Learning. IJCV, 2010.
[36] S. Vijayanarasimhan and K. Grauman. Active Frame Selection for Label Propagation in Videos. In Proc.
ECCV, 2012.
[37] A. L. Yuille and A. Rangarajan. The Concave-Convex Procedure. Neural Computation, 2003.
4,369 | 4,954 | Low-Rank Matrix and Tensor Completion via
Adaptive Sampling
Akshay Krishnamurthy
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Aarti Singh
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We study low rank matrix and tensor completion and propose novel algorithms
that employ adaptive sampling schemes to obtain strong performance guarantees.
Our algorithms exploit adaptivity to identify entries that are highly informative
for learning the column space of the matrix (tensor) and consequently, our results
hold even when the row space is highly coherent, in contrast with previous analyses. In the absence of noise, we show that one can exactly recover an n × n matrix
of rank r from merely Ω(nr^{3/2} log(r)) matrix entries. We also show that one can
recover an order-T tensor using Ω(nr^{T−1/2} T² log(r)) entries. For noisy recovery, our algorithm consistently estimates a low-rank matrix corrupted with noise
using Ω(nr^{3/2} polylog(n)) entries. We complement our study with simulations
that verify our theory and demonstrate the scalability of our algorithms.
1 Introduction
Recently, the machine learning and signal processing communities have focused considerable attention toward understanding the benefits of adaptive sensing. This theme is particularly relevant to
modern data analysis, where adaptive sensing has emerged as an efficient alternative to obtaining
and processing the large data sets associated with scientific investigation. These empirical observations have led to a number of theoretical studies characterizing the performance gains offered by
adaptive sensing over conventional, passive approaches. In this work, we continue in that direction
and study the role of adaptive data acquisition in low rank matrix and tensor completion problems.
Our study is motivated not only by prior theoretical results in favor of adaptive sensing but also
by several applications where adaptive sensing is feasible. In recommender systems, obtaining a
measurement amounts to asking a user about an item, an interaction that has been deployed in
production systems. Another application pertains to network tomography, where a network operator
is interested in inferring latencies between hosts in a communication network while injecting few
packets into the network. The operator, being in control of the network, can adaptively sample the
matrix of pair-wise latencies, potentially reducing the total number of measurements. In particular,
the operator can obtain full columns of the matrix by measuring from one host to all others, a
sampling strategy we will exploit in this paper.
Yet another example centers around gene expression analysis, where the object of interest is a matrix
of expression levels for various genes across a number of conditions. There are typically two types
of measurements: low-throughput assays provide highly reliable measurements of single entries
in this matrix while high-throughput microarrays provide expression levels of all genes of interest
across operating conditions, thus revealing entire columns. The completion problem can be seen
as a strategy for learning the expression matrix from both low- and high-throughput data while
minimizing the total measurement cost.
1.1 Contributions
We develop algorithms with theoretical guarantees for three low-rank completion problems. The
algorithms find a small subset of columns of the matrix (tensor) that can be used to reconstruct or
approximate the matrix (tensor). We exploit adaptivity to focus on highly informative columns, and
this enables us to do away with the usual incoherence assumptions on the row-space while achieving
competitive (or in some cases better) sample complexity bounds. Specifically our results are:
1. In the absence of noise, we develop a streaming algorithm that enjoys both low sample
requirements and computational overhead. In the matrix case, we show that Ω(nr^{3/2} log r)
adaptively chosen samples are sufficient for exact recovery, improving on the best known
bound of Ω(nr² log² n) in the passive setting [21]. This also gives the first guarantee for
matrix completion with coherent row space.
2. In the tensor case, we establish that Ω(nr^{T−1/2} T² log r) adaptively chosen samples are
sufficient for recovering an n × . . . × n order-T tensor of rank r. We complement this
with a necessary condition for tensor completion under random sampling, showing that
our adaptive strategy is competitive with any passive algorithm. These are the first sample
complexity upper and lower bounds for exact tensor completion.
3. In the noisy matrix completion setting, we modify the adaptive column subset selection
algorithm of Deshpande et al. [10] to give an algorithm that finds a rank-r approximation
to a matrix using Ω(nr^{3/2} polylog(n)) samples. As before, the algorithm does not require
an incoherent row space but we are no longer able to process the matrix sequentially.
4. Along the way, we improve on existing results for subspace detection from missing data,
the problem of testing if a partially observed vector lies in a known subspace.
2 Related Work
The matrix completion problem has received considerable attention in recent years. A series of
papers [6, 7, 13, 21], culminating in Recht's elegant analysis of the nuclear norm minimization program, address the exact matrix completion problem through the framework of convex optimization,
establishing that Ω((n1 + n2) r max{μ0, μ1²} log²(n2)) randomly drawn samples are sufficient to
exactly identify an n1 × n2 matrix with rank r. Here μ0 and μ1 are parameters characterizing the
incoherence of the row and column spaces of the matrix, which we will define shortly. Candes and
Tao [7] proved that under random sampling Ω(n1 r μ0 log(n2)) samples are necessary, showing that
nuclear norm minimization is near-optimal.
The noisy matrix completion problem has also received considerable attention [5, 17, 20]. The
majority of these results also involve some parameter that quantifies how much information a single
observation reveals, in the same vein as incoherence.
Tensor completion, a natural generalization of matrix completion, is less studied. One challenge
stems from the NP-hardness of computing most tensor decompositions, pushing researchers to study
alternative structure-inducing norms in lieu of the nuclear norm [11, 22]. Both papers derive algorithms for tensor completion, but neither provide sample complexity bounds for the noiseless case.
Our approach involves adaptive data acquisition, and consequently our work is closely related to
a number of papers focusing on using adaptive measurements to estimate a sparse vector [9, 15].
In these problems, specifically, problems where the sparsity basis is known a priori, we have a
reasonable understanding of how adaptive sampling can lead to performance improvements. As a
low rank matrix is sparse in its unknown eigenbasis, the completion problem is coupled with learning
this basis, which poses a new challenge for adaptive sampling procedures.
Another relevant line of work stems from the matrix approximations literature. Broadly speaking,
this research is concerned with efficiently computing a structured matrix, i.e. sparse or low rank,
that serves as a good approximation to a fully observed input matrix. Two methods that apply to
the missing data setting are the Nystrom method [12, 18] and entrywise subsampling [1]. While
the sample complexity bounds match ours, the analysis for the Nystrom method has focused on
positive-semidefinite kernel matrices and requires incoherence of both the row and column spaces.
On the other hand, entrywise subsampling is applicable, but the guarantees are weaker than ours.
It is also worth briefly mentioning the vast body of literature on column subset selection, the task
of approximating a matrix by projecting it onto a few of its columns. While the best algorithms,
namely volume sampling [14] and sampling according to statistical leverages [3], do not seem to be
readily applicable to the missing data setting, some algorithms are. Indeed our procedure for noisy
matrix completion is an adaptation of an existing column subset selection procedure [10].
Our techniques are also closely related to ideas employed for subspace detection ? testing whether a
vector lies in a known subspace ? and subspace tracking ? learning a time-evolving low-dimensional
subspace from vectors lying close to that subspace. Balzano et al. [2] prove guarantees for subspace
detection with known subspace and a partially observed vector, and we will improve on their result
en route to establishing our guarantees. Subspace tracking from partial information has also been
studied [16], but little is known theoretically about this problem.
3 Definitions and Preliminaries
Before presenting our algorithms, we clarify some notation and definitions. Let M ∈ R^{n1×n2} be a
rank r matrix with singular value decomposition UΣV^T. Let c1, . . . , c_{n2} denote the columns of M.
Let M ∈ R^{n1×...×nT} denote an order-T tensor with canonical decomposition:

  M = Σ_{k=1}^{r} a_k^{(1)} ⊗ a_k^{(2)} ⊗ . . . ⊗ a_k^{(T)}    (1)

where ⊗ is the outer product. Define rank(M) to be the smallest value of r that establishes this
equality. Note that the vectors {a_k^{(t)}}_{k=1}^{r} need not be orthogonal, nor even linearly independent.
The mode-t subtensors of M, denoted M_i^{(t)}, are order T−1 tensors obtained by fixing the i-th
coordinate of the t-th mode. For example, if M is an order-3 tensor, then the M_i^{(3)} are the frontal slices.
We represent a d-dimensional subspace U ⊂ R^n as a set of orthonormal basis vectors U = {u_i}_{i=1}^d
and in some cases as an n × d matrix whose columns are the basis vectors. The interpretation will be
clear from context. Define the orthogonal projection onto U as P_U v = U(U^T U)^{−1} U^T v.
For a set Ω ⊂ [n]¹, c_Ω ∈ R^{|Ω|} is the vector whose elements are c_i, i ∈ Ω, indexed lexicographically.
Similarly, the matrix U_Ω ∈ R^{|Ω|×d} has rows indexed by Ω lexicographically. Note that if U is an
orthobasis for a subspace, U_Ω is a |Ω| × d matrix with columns u_{iΩ} where u_i ∈ U, rather than a set
of orthonormal basis vectors. In particular, the matrix U_Ω need not have orthonormal columns.
These definitions extend to the tensor setting with slight modifications. We use the vec operation
to unfold a tensor into a single vector and define the inner product ⟨x, y⟩ = vec(x)^T vec(y). For a
subspace U ⊂ R^{⊗n_i}, we write it as a (∏ n_i) × d matrix whose columns are vec(u_i), u_i ∈ U. We
can then define projections and subsampling as we did in the vector case.
As in recent work on matrix completion [7, 21], we will require a certain amount of incoherence
between the column space associated with M (M) and the standard basis.
Definition 1. The coherence of an r-dimensional subspace U ⊂ R^n is:

  μ(U) ≜ (n/r) max_{1≤j≤n} ||P_U e_j||₂²    (2)

where e_j denotes the j-th standard basis element.
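The following small, runnable check of Definition 1 may be helpful; it computes the coherence of the column span of a random matrix directly from an orthonormal basis.

import numpy as np

def coherence(U):
    # mu(U) = (n/r) * max_j ||P_U e_j||_2^2 for an orthonormal n x r basis U.
    # Since P_U e_j = U U^T e_j, its squared norm is the squared norm of row j of U.
    n, r = U.shape
    return (n / r) * np.max(np.sum(U**2, axis=1))

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 5)))     # orthonormal basis
print(coherence(U))                                    # lies between 1 and n/r = 20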
In previous analyses of matrix completion, the incoherence assumption is that both the row and column spaces of the matrix have coherences upper bounded by μ0. When both spaces are incoherent,
each entry of the matrix reveals roughly the same amount of information, so there is little to be
gained from adaptive sampling, which typically involves looking for highly informative measurements. Thus the power of adaptivity for these problems should center around relaxing the incoherence assumption, which is the direction we take in this paper. Unfortunately, even under adaptive
sampling, it is impossible to identify a rank one matrix that is zero in all but one entry without observing the entire matrix, implying that we cannot completely eliminate the assumption. Instead, we
will retain incoherence on the column space, but remove the restrictions on the row space.
¹We write [n] for {1, . . . , n}.
Algorithm 1: Sequential Tensor Completion (M, {m_t}_{t=1}^T)
1. Let U = ∅.
2. Randomly draw entries Ω ⊂ ∏_{t=1}^{T−1} [n_t] uniformly with replacement w.p. m_T / ∏_{t=1}^{T−1} n_t.
3. For each mode-T subtensor M_i^{(T)} of M, i ∈ [n_T]:
   (a) If ||M_{iΩ}^{(T)} − P_{U_Ω} M_{iΩ}^{(T)}||₂² > 0:
       i. M̂_i^{(T)} ← recurse on (M_i^{(T)}, {m_t}_{t=1}^{T−1})
       ii. U_i ← P_{U⊥} M̂_i^{(T)} / ||P_{U⊥} M̂_i^{(T)}||; U ← U ∪ U_i.
   (b) Otherwise M̂_i^{(T)} ← U (U_Ω^T U_Ω)^{−1} U_Ω^T M_{iΩ}^{(T)}.
4. Return M̂ with mode-T subtensors M̂_i^{(T)}.

4 Exact Completion Problems
In the matrix case, our sequential algorithm builds up the column space of the matrix by selecting a
few columns to observe in their entirety. In particular, we maintain a candidate column space Û and
test whether a column c_i lives in Û or not, choosing to completely observe c_i and add it to Û if it
does not. Balzano et al. [2] observed that we can perform this test with a subsampled version of c_i,
meaning that we can recover the column space using few samples. Once we know the column space,
recovering the matrix, even from few observations, amounts to solving determined linear systems.
For tensors, the algorithm becomes recursive in nature. At the outer level of the recursion, the
algorithm maintains a candidate subspace U for the mode-T subtensors M_i^{(T)}. For each of these
subtensors, we test whether M_i^{(T)} lives in U and recursively complete that subtensor if it does not.
Once we complete the subtensor, we add it to U and proceed at the outer level. When the subtensor
itself is just a column, we observe it in its entirety.
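Before stating the formal guarantee, here is a self-contained sketch of the matrix case, written from the description above; it is our reading of the procedure, not the authors' released code, and it accesses the full matrix only for columns that fail the subsampled test.

import numpy as np

def sequential_complete(M, m, tol=1e-8, seed=0):
    n1, n2 = M.shape
    rng = np.random.default_rng(seed)
    omega = rng.choice(n1, size=m, replace=True)       # one index set, as in Algorithm 1
    U = np.zeros((n1, 0))                              # orthonormal basis of the candidate span
    Mhat = np.zeros_like(M, dtype=float)
    for i in range(n2):
        c_om = M[omega, i]
        if U.shape[1] > 0:
            # least squares plays the role of (U_om^T U_om)^{-1} U_om^T c_om
            coef, *_ = np.linalg.lstsq(U[omega], c_om, rcond=None)
            resid = np.linalg.norm(c_om - U[omega] @ coef) ** 2
        else:
            coef, resid = np.zeros(0), np.linalg.norm(c_om) ** 2
        if resid > tol:                                # column is informative: observe it fully
            c = M[:, i]
            r = c - U @ (U.T @ c)
            U = np.hstack([U, (r / np.linalg.norm(r))[:, None]])
            Mhat[:, i] = c
        else:                                          # solve the determined linear system
            Mhat[:, i] = U @ coef
    return Mhat

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))   # rank-5 test matrix
err = np.linalg.norm(M - sequential_complete(M, m=60)) / np.linalg.norm(M)
print(err)                                             # should be near machine precision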
The pseudocode of the algorithm is given in Algorithm 1. Our first main result characterizes the
performance of the tensor completion algorithm. We defer the proof to the appendix.
Theorem 2. Let M = Σ_{j=1}^{r} ⊗_{t=1}^{T} a_j^{(t)} be a rank r order-T tensor with subspaces A^{(t)} =
span({a_j^{(t)}}_{j=1}^{r}). Suppose that all of A^{(1)}, . . . , A^{(T−1)} have coherence bounded above by μ0. Set
m_t = 36 r^{t−1/2} μ0^{t−1} log(2r/δ) for each t. Then with probability at least 1 − 5δT r^T, Algorithm 1 exactly
recovers M and has expected sample complexity

  36 (Σ_{t=1}^{T} n_t) r^{T−1/2} μ0^{T−1} log(2r/δ)    (3)
In the special case of an n × . . . × n tensor of order T, the algorithm succeeds with high probability
using Ω(nr^{T−1/2} μ0^{T−1} T² log(Tr/δ)) samples, exhibiting a linear dependence on the tensor
dimensions. In comparison, the only guarantee we are aware of shows that Ω((∏_{t=2}^{T} n_t) r) samples are
sufficient for consistent estimation of a noisy tensor, exhibiting a much worse dependence on tensor
dimension [23]. In the noiseless scenario, one can unfold the tensor into an n1 × ∏_{t=2}^{T} n_t matrix
and apply any matrix completion algorithm. Unfortunately, without exploiting the additional tensor
structure, this approach will scale with ∏_{t=2}^{T} n_t, which is similarly much worse than our guarantee.
Note that the naïve procedure that does not perform the recursive step has sample complexity scaling
with the product of the dimensions and is therefore much worse than our algorithm.
with the product of the dimensions and is therefore much worse than the our algorithm.
The most obvious specialization of Theorem 2 is to the matrix completion problem:
Corollary 3. Let M := U ?V T 2 Rn1 ?n2 have rank r, and fix > 0. Assume ?(U ) ? ?0 . Setting
m , m2 36r3/2 ?0 log( 2r ), the sequential algorithm exactly recovers M with probability at least
1 4r + while using in expectation
36n2 r3/2 ?0 log(2r/ ) + rn1
4
(4)
observations. The algorithm runs in O(n1 n2 r + r³ m) time.
A few comments are in order. Recht [21] guaranteed exact recovery for the nuclear norm minimization procedure as long as the number of observations exceeds 32(n1 + n2) r max{μ0, μ1²} β log²(2n2),
where β controls the probability of failure and ||UV^T||_∞ ≤ μ1 √(r/(n1 n2)), with μ1 as another coherence parameter. Without additional assumptions, μ1 can be as large as μ0 √r. In this case, our
bound improves on his in its dependence on r, μ0, and logarithmic terms.
bound improves on his in its the dependence on r, ?0 and logarithmic terms.
The Nystrom method can also be applied to the matrix completion problem, albeit under nonuniform sampling. Given a PSD matrix, one uses a randomly sampled set of columns and the corresponding rows to approximate the remaining entries. Gittens showed that if one samples O(r log r)
columns, then one can exactly reconstruct a rank r matrix [12]. This result requires incoherence of
both row and column spaces, so it is more restrictive than ours. Almost all previous results for exact
matrix completion require incoherence of both row and column spaces.
The one exception is a recent paper by Chen et al. that we became aware of while preparing the
final version of this work [8]. They show that sampling the matrix according to statistical leverages
of the rows and columns can eliminate the need for incoherence assumptions. Specifically, when the
matrix has incoherent column space, they show that by first estimating the leverages of the columns,
sampling the matrix according to this distribution, and then solving the nuclear norm minimization
program, one can recover the matrix with Ω(nrμ0 log² n) samples. Our result improves on theirs
when r is small compared to n, specifically when √r log r ≤ log² n, which is common.
Our algorithm is also very computationally efficient. Existing algorithms involve successive singular
value decompositions (O(n1 n2 r) per iteration), resulting in much worse running times.
The key ingredient in our proofs is a result pertaining to subspace detection, the task of testing if
a subsampled vector lies in a subspace. This result, which improves over the results of Balzano et
al. [2], is crucial in obtaining our sample complexity bounds, and may be of independent interest.
Theorem 4. Let U be a d-dimensional subspace of R^n and y = x + v where x ∈ U and v ∈ U^⊥.
Fix δ > 0, m ≥ (8/3) d μ(U) log(2d/δ), and let Ω be an index set with entries sampled uniformly with
replacement with probability m/n. Then with probability at least 1 − 4δ:

  ((m(1 − α) − dμ(U) β/(1 − γ)) / n) ||v||₂² ≤ ||y_Ω − P_{U_Ω} y_Ω||₂² ≤ (1 + α) (m/n) ||v||₂²    (5)

where α = √((2μ(v)/m) log(1/δ)) + (2μ(v)/(3m)) log(1/δ), β = 6 log(d/δ) + (4dμ(v)/(3m)) log²(d/δ),
γ = √((8dμ(U)/(3m)) log(2d/δ)), and μ(v) = n ||v||_∞² / ||v||₂².

This theorem shows that if m = Ω(max{μ(v), dμ(U), d√(μ(U)μ(v))} log d), then the orthogonal
projection from missing data is within a constant factor of the fully observed one. In contrast,
Balzano et al. [2] give a similar result that requires m = Ω(max{μ(v)², dμ(U), dμ(U)μ(v)} log d)
to get a constant-factor approximation. In the matrix case, this improved dependence on incoherence
parameters brings our sample complexity down from nr² μ0² log r to nr^{3/2} μ0 log r. We conjecture
that this theorem can be further improved to eliminate another √r factor from our final bound.
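A quick numerical illustration of this projection test, under our own naming, shows the statistic ||y_Ω − P_{U_Ω} y_Ω||₂² concentrating near (m/n)||v||₂², as Theorem 4 predicts.

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 10, 200
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
x = U @ rng.standard_normal(d)                         # component inside the subspace
v = rng.standard_normal(n)
v -= U @ (U.T @ v)                                     # component in the orthogonal complement
y = x + v
omega = rng.choice(n, size=m, replace=True)
coef, *_ = np.linalg.lstsq(U[omega], y[omega], rcond=None)
stat = np.linalg.norm(y[omega] - U[omega] @ coef) ** 2
print(stat, (m / n) * np.linalg.norm(v) ** 2)          # same order of magnitude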
4.1 Lower Bounds for Uniform Sampling
We adapt the proof strategy of Candes and Tao [7] to the tensor completion problem and establish
the following lower bound for uniform sampling:
Theorem 5 (Passive Lower Bound). Fix 1 ≤ m, r ≤ min_t n_t and μ0 > 1. Fix 0 < δ < 1/2 and
suppose that we do not have the condition:

  −log(1 − m/∏_{i=1}^{T} n_i) ≥ (μ0^{T−1} r^{T−1} / ∏_{i=2}^{T} n_i) log(n1/(2δ))    (6)

Then there exist infinitely many pairs of distinct n1 × . . . × nT order-T tensors M ≠ M′ of rank r
with coherence parameter ≤ μ0 such that P_Ω(M) = P_Ω(M′) with probability at least δ. Each entry
is observed independently with probability m/∏_{i=1}^{T} n_i.
Theorem 5 implies that as long as the right-hand side of Equation 6 is at most ε < 1, and

  m ≤ n1 r^{T−1} μ0^{T−1} log(n1/(2δ)) (1 − ε/2)    (7)

then with probability at least δ there are infinitely many matrices that agree on the observed entries.
This gives a necessary condition on the number of samples required for tensor completion. Note
that when T = 2 we recover the known lower bound for matrix completion.
Theorem 5 gives a necessary condition under uniform sampling. Comparing with Theorem 2 shows
that our procedure outperforms any passive procedure in its dependence on the tensor dimensions.
However, our guarantee is suboptimal in its dependence on r. The extra factor of √r would be
eliminated by a further improvement to Theorem 5, which we conjecture is indeed possible.
For adaptive sampling, one can obtain a lower bound via a parameter-counting argument. Observing
the (i1, . . . , iT)-th entry leads to a polynomial equation of the form Σ_k ∏_t a_k^{(t)}(i_t) = M_{i1,...,iT}. If
m < r(Σ_t n_t), this system is underdetermined, showing that Ω((Σ_t n_t) r) observations are necessary for exact recovery, even under adaptive sampling. Thus, our algorithm enjoys sample complexity with optimal dependence on the dimensions.
5 Noisy Matrix Completion
Our algorithm for noisy matrix completion is an adaptation of the column subset selection (CSS)
algorithm analyzed by Deshpande et al. [10]. The algorithm builds a candidate column space in
rounds; at each round it samples additional columns with probability proportional to their projection
on the orthogonal complement of the candidate column space.
To concretely describe the algorithm, suppose that at the beginning of the lth round we have a
candidate subspace Ul . Then in the lth round, we draw s additional columns according to the
distribution where the probability of drawing the i-th column is proportional to ||P_{U_l^⊥} c_i||₂². Observing
these s columns in full and then adding them to the subspace U_l gives the candidate subspace U_{l+1}
for the next round. We initialize the algorithm with U_1 = ∅. After L rounds, we approximate each
column c with ĉ = U_L (U_{LΩ}^T U_{LΩ})^{−1} U_{LΩ}^T c_Ω and concatenate these estimates to form M̂.
The challenge is that the algorithm cannot compute the sampling probabilities without observing
entries of the matrix. However, our results show that with reliable estimates, which can be computed
from few observations, the algorithm still performs well.
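For concreteness, one round of this adaptive column sampling can be sketched as follows; we use fully observed residual norms for clarity, whereas in the missing-data setting the norms ||P_{U_l^⊥} c_i||₂² would be estimated from subsampled entries.

import numpy as np

def css_round(M, U, s, rng):
    # Sample s column indices with probability proportional to their squared
    # residual after projecting onto span(U), then extend the candidate basis.
    R = M - U @ (U.T @ M)
    p = np.sum(R**2, axis=0)
    p = p / p.sum()
    idx = rng.choice(M.shape[1], size=s, replace=False, p=p)
    U_new, _ = np.linalg.qr(np.hstack([U, M[:, idx]]))
    return U_new, idx

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 500))
M = A + 0.01 * rng.standard_normal((100, 500))         # low rank plus noise
U = np.zeros((100, 0))
for _ in range(3):                                     # a few rounds of s = 2 columns
    U, _ = css_round(M, U, s=2, rng=rng)
print(np.linalg.norm(M - U @ (U.T @ M)) / np.linalg.norm(M))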
We assume that the matrix M ∈ R^{n1×n2} can be decomposed as a rank r matrix A plus a random
Gaussian matrix R whose entries are independently drawn from N(0, σ²). We write A = UΣV^T
and assume that μ(U) ≤ μ0. As before, the incoherence assumption is crucial in guaranteeing that
one can estimate the column norms, and consequently the sampling probabilities, from missing data.
Theorem 6. Let Ω be the set of all observations over the course of the algorithm, let U_L
be the subspace obtained after L = log(n1 n2) rounds, and let M̂ be the matrix whose columns are
ĉ_i = U_L (U_{LΩ}^T U_{LΩ})^{−1} U_{LΩ}^T c_{iΩ}. Then there are constants c1, c2 such that:

  ||A − M̂||_F² ≤ (c1/(n1 n2)) ||A||_F² + c2 ||R_Ω||_F²

M̂ can be computed from Ω((n1 + n2) r^{3/2} μ(U) polylog(n1 n2)) observations. In particular, if
||A||_F² = 1 and R_ij ∼ N(0, σ²/(n1 n2)), then there is a constant c′ for which:

  ||A − Â||_F² ≤ c′ (1 + σ²) (n1 + n2) r^{3/2} μ(U) polylog(n1 n2) / (n1 n2)
The main improvement in the result is in relaxing the assumptions on the underlying matrix A.
Existing results for noisy matrix completion require that the energy of the matrix is well spread
out across both the rows and the columns (i.e. incoherence), and the sample complexity guarantees
deteriorate significantly without such an assumption [5, 17]. As a concrete example, Negahban and
Wainwright [20] use a notion of spikiness, measured as √(n1 n2) ||A||_∞ / ||A||_F, which can be as large as √n2
in our setup, e.g. when the matrix is zero except on one column and constant across that column.
Figure 1: Probability of success curves for our noiseless matrix completion algorithm (top) and
SVT (middle). Top: Success probability as a function of: Left: p, the fraction of samples per
column, Center: np, total samples per column, and Right: np/ log² n, expected samples per column
for passive completion. Bottom: Success probability of our noiseless algorithm for different values
of r as a function of p, the fraction of samples per column (left), p/r^{3/2} (middle) and p/r (right).
The choices of ||A||_F² = 1 and noise variance rescaled by 1/(n1 n2) enable us to compare our results
with related work [20]. Thinking of n1 = n2 = n and the incoherence parameter as a constant, our
results imply consistent estimation as long as σ² = o(n/(r² polylog(n))). On the other hand, thinking
of the spikiness parameter as a constant, [20] show that the error is bounded by σ² nr log n / m, where
m is the total number of observations. Using the same number of samples as our procedure, their
results imply consistency as long as σ² = o(r polylog(n)). For small r (i.e. r = O(1)), our noise
tolerance is much better, but their results apply even with fewer observations, while ours do not.
6 Simulations
We verify Corollary 3's linear dependence on n in Figure 1, where we empirically compute the
success probability of the algorithm for varying values of n and p = m/n, the fraction of entries
observed per column. Here we study square matrices of fixed rank r = 5 with μ(U) = 1. Figure 1(a)
shows that our algorithm can succeed with sampling a smaller and smaller fraction of entries as n
increases, as we expect from Corollary 3. In Figure 1(b), we instead plot success probability against
total number of observations per column. The fact that the curves coincide suggests that the samples
per column, m, is constant with respect to n, which is precisely what Corollary 3 implies. Finally,
in Figure 1(c), we rescale instead by n/ log² n, which corresponds to the passive sample complexity
bound [21]. Empirically, the fact that these curves do not line up demonstrates that our algorithm
requires fewer than log2 n samples per column, outperforming the passive bound.
The second row of Figure 1 plots the same probability of success curves for the Singular Value
Thresholding (SVT) algorithm [4]. As is apparent from the plots, SVT does not enjoy a linear
dependence on n; indeed Figure 1(f) confirms the logarithmic dependency that we expect for passive
matrix completion, and establishes that our algorithm has empirically better performance.
Figure 2: Reconstruction error as a function of row space incoherence for our noisy algorithm (CSS) and the semidefinite program of [20].

Table 1: Computational results on large low-rank matrices. d_r = r(2n − r) is the degrees of freedom, so m/d_r is the oversampling ratio.

  n       r     m/d_r   m/n²   time (s)
  1000    10    3.4     0.07   16
  1000    50    3.3     0.33   29
  1000    100   3.2     0.61   45
  5000    10    3.4     0.01   3
  5000    50    3.5     0.07   27
  5000    100   3.4     0.14   104
  10000   10    3.4     0.01   10
  10000   50    3.5     0.03   84
  10000   100   3.5     0.07   283
In the third row, we study the algorithm's dependence on r on 500 × 500 square matrices. In Figure 1(g) we plot the probability of success of the algorithm as a function of the sampling probability
p for matrices of various ranks, and observe that the sample complexity increases with r. In Figure 1(h) we rescale the x-axis by r^{−3/2} so that if our theorem is tight, the curves should coincide. In
Figure 1(i) we instead rescale the x-axis by r^{−1}, corresponding to our conjecture about the performance of the algorithm. Indeed, the curves line up in Figure 1(i), demonstrating that empirically, the
number of samples needed per column is linear in r rather than the r^{3/2} dependence in our theorem.
To confirm the computational improvement over existing methods, we ran our matrix completion
algorithm on large-scale matrices, recording the running time and error in Table 1. To contrast with
SVT, we refer the reader to Table 5.1 in [4]. As an example, recovering a 10000 × 10000 matrix of
rank 100 takes close to 2 hours with the SVT, while it takes less than 5 minutes with our algorithm.
For the noisy algorithm, we study the dependence on row-space incoherence. In Figure 2, we plot the
reconstruction error as a function of the row space coherence for our procedure and the semidefinite
program of Negahban and Wainwright [20], where we ensure that both algorithms use the same
number of observations. It's readily apparent that the SDP decays in performance as the row space
becomes more coherent while the performance of our procedure is unaffected.
7 Conclusions and Open Problems
In this work, we demonstrate how sequential active algorithms can offer significant improvements
in time, and measurement overhead over passive algorithms for matrix and tensor completion. We
hope our work motivates further study of sequential active algorithms for machine learning.
Several interesting theoretical questions arise from our work:
1. Can we tighten the dependence on rank for these problems? In particular, can we bring the
dependence on r down from r3/2 to linear? Simulations suggest this is possible.
2. Can one generalize the nuclear norm minimization program for matrix completion to the
tensor completion setting while providing theoretical guarantees on sample complexity?
We hope to pursue these directions in future work.
Acknowledgements
This research is supported in part by AFOSR under grant FA9550-10-1-0382 and NSF under grant
IIS-1116458. AK is supported in part by a NSF Graduate Research Fellowship. AK would like to
thank Martin Azizyan, Sivaraman Balakrishnan and Jayant Krishnamurthy for fruitful discussions.
References
[1] Dimitris Achlioptas and Frank McSherry. Fast computation of low-rank matrix approximations. Journal of the ACM (JACM), 54(2):9, 2007.
[2] Laura Balzano, Benjamin Recht, and Robert Nowak. High-dimensional matched subspace detection when data are missing. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, pages 1638–1642. IEEE, 2010.
[3] Christos Boutsidis, Michael W Mahoney, and Petros Drineas. An improved approximation algorithm for the column subset selection problem. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 968–977. Society for Industrial and Applied Mathematics, 2009.
[4] Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[5] Emmanuel J Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[6] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[7] Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. Information Theory, IEEE Transactions on, 56(5):2053–2080, 2010.
[8] Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, and Rachel Ward. Coherent matrix completion. arXiv preprint arXiv:1306.2979, 2013.
[9] Mark A Davenport and Ery Arias-Castro. Compressive binary search. In Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on, pages 1827–1831. IEEE, 2012.
[10] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing, 2:225–247, 2006.
[11] Silvia Gandy, Benjamin Recht, and Isao Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[12] Alex Gittens. The spectral norm error of the naive Nystrom extension. arXiv preprint arXiv:1110.5305, 2011.
[13] David Gross. Recovering low-rank matrices from few coefficients in any basis. Information Theory, IEEE Transactions on, 57(3):1548–1566, 2011.
[14] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1207–1214. SIAM, 2012.
[15] Jarvis D Haupt, Richard G Baraniuk, Rui M Castro, and Robert D Nowak. Compressive distilled sensing: Sparse recovery using adaptivity in compressive measurements. In Signals, Systems and Computers, 2009 Conference Record of the Forty-Third Asilomar Conference on, pages 1551–1555. IEEE, 2009.
[16] Jun He, Laura Balzano, and John Lui. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
[17] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. The Journal of Machine Learning Research, 11:2057–2078, 2010.
[18] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling methods for the Nyström method. The Journal of Machine Learning Research, 13:981–1006, 2012.
[19] Béatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
[20] Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 2012.
[21] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011.
[22] Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
[23] Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, and Hisashi Kashima. Statistical performance of convex tensor decomposition. In Advances in Neural Information Processing Systems, pages 972–980, 2011.
[24] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
4,370 | 4,955 |
Adaptive Submodular Maximization in Bandit Setting
Victor Gabillon
INRIA Lille - team SequeL
Villeneuve d'Ascq, France
Branislav Kveton
Technicolor Labs
Palo Alto, CA
Zheng Wen
Electrical Engineering Department
Stanford University
[email protected] [email protected] [email protected]
Brian Eriksson
Technicolor Labs
Palo Alto, CA
[email protected]
S. Muthukrishnan
Department of Computer Science
Rutgers
[email protected]
Abstract
Maximization of submodular functions has wide applications in machine learning
and artificial intelligence. Adaptive submodular maximization has been traditionally studied under the assumption that the model of the world, the expected gain
of choosing an item given previously selected items and their states, is known. In
this paper, we study the setting where the expected gain is initially unknown, and
it is learned by interacting repeatedly with the optimized function. We propose an
efficient algorithm for solving our problem and prove that its expected cumulative
regret increases logarithmically with time. Our regret bound captures the inherent
property of submodular maximization, earlier mistakes are more costly than later
ones. We refer to our approach as Optimistic Adaptive Submodular Maximization
(OASM) because it trades off exploration and exploitation based on the optimism in
the face of uncertainty principle. We evaluate our method on a preference elicitation problem and show that non-trivial K-step policies can be learned from just a
few hundred interactions with the problem.
1
Introduction
Maximization of submodular functions [14] has wide applications in machine learning and artificial
intelligence, such as social network analysis [9], sensor placement [10], and recommender systems
[7, 2]. In this paper, we study the problem of adaptive submodular maximization [5]. This problem
is a variant of submodular maximization where each item has a state and this state is revealed when
the item is chosen. The goal is to learn a policy that maximizes the expected return for choosing K
items.
Adaptive submodular maximization has been traditionally studied in the setting where the model of
the world, the expected gain of choosing an item given previously selected items and their states, is
known. This is the first paper that studies the setting where the model is initially unknown, and it is
learned by interacting repeatedly with the environment. We bring together the concepts of adaptive
submodular maximization and bandits, and the result is an efficient solution to our problem.
We make four major contributions. First, we propose a model where the expected gain of choosing
an item can be learned efficiently. The main assumption in the model is that the state of each item is
distributed independently of the other states. Second, we propose Optimistic Adaptive Submodular
Maximization (OASM), a bandit algorithm that selects items with the highest upper confidence bound
on the expected gain. This algorithm is computationally efficient and easy to implement. Third, we
prove that the expected cumulative regret of our algorithm increases logarithmically with time. Our
regret bound captures the inherent property of adaptive submodular maximization, earlier mistakes
are more costly than later ones. Finally, we apply our approach to a real-world preference elicitation
problem and show that non-trivial policies can be learned from just a few hundred interactions with
the problem.
2
Adaptive Submodularity
In adaptive submodular maximization, the objective is to maximize, under constraints, a function of
the form:
    f : 2^I × {−1, 1}^L → ℝ,          (1)
where I = {1, . . . , L} is a set of L items and 2^I is its power set. The first argument of f is a subset
of chosen items A ⊆ I. The second argument is the state φ ∈ {−1, 1}^L of all items. The i-th entry
of φ, φ[i], is the state of item i. The state φ is drawn i.i.d. from some probability distribution P(Φ).
The reward for choosing items A in state φ is f(A, φ). For simplicity of exposition, we assume that
f(∅, φ) = 0 in all φ. In problems of our interest, the state is only partially observed. To capture this
phenomenon, we introduce the notion of observations. An observation is a vector y ∈ {−1, 0, 1}^L
whose non-zero entries are the observed states of items. We say that y is an observation of state φ,
and write φ ∼ y, if y[i] = φ[i] in all non-zero entries of y. Alternatively, the state φ can be viewed
as a realization of y, one of many. We denote by dom(y) = {i : y[i] ≠ 0} the observed items in y
and by φ⟨A⟩ the observation of items A in state φ. We define a partial ordering on observations and
write y′ ⪰ y if y′[i] = y[i] in all non-zero entries of y; y′ is a more specific observation than y. In
the terminology of Golovin and Krause [5], y is a subrealization of y′.
We illustrate our notation on a simple example. Let φ = (1, 1, −1) be a state, and y_1 = (1, 0, 0) and
y_2 = (1, 0, −1) be observations. Then all of the following claims are true:
    φ ∼ y_1,   φ ∼ y_2,   y_2 ⪰ y_1,   dom(y_2) = {1, 3},   φ⟨{1, 3}⟩ = y_2,   φ⟨dom(y_1)⟩ = y_1.
Our goal is to maximize the expected value of f by adaptively choosing K items. This problem can
be viewed as a K-step game, where at each step we choose an item according to some policy π and
then observe its state. A policy π : {−1, 0, 1}^L → I is a function from observations y to items. The
observations represent our past decisions and their outcomes. A k-step policy in state φ, π_k(φ), is a
collection of the first k items chosen by policy π. The policy is defined recursively as:
    π_k(φ) = π_{k−1}(φ) ∪ {π_[k](φ)},   π_[k](φ) = π(φ⟨π_{k−1}(φ)⟩),   π_0(φ) = ∅,          (2)
where π_[k](φ) is the k-th item chosen by policy π in state φ. The optimal K-step policy satisfies:
    π* = arg max_π E_Φ[f(π_K(Φ), Φ)].          (3)
In general, the problem of computing π* is NP-hard [14, 5]. However, near-optimal policies can be
computed efficiently when the maximized function has a diminishing return property. Formally, we
require that the function is adaptive submodular and adaptive monotonic [5].
Definition 1. Function f is adaptive submodular if:
    E_Φ[f(A ∪ {i}, Φ) − f(A, Φ) | Φ ∼ y_A] ≥ E_Φ[f(B ∪ {i}, Φ) − f(B, Φ) | Φ ∼ y_B]
for all items i ∈ I \ B and observations y_B ⪰ y_A, where A = dom(y_A) and B = dom(y_B).
Definition 2. Function f is adaptive monotonic if E_Φ[f(A ∪ {i}, Φ) − f(A, Φ) | Φ ∼ y_A] ≥ 0 for
all items i ∈ I \ A and observations y_A, where A = dom(y_A).
In other words, the expected gain of choosing an item is always non-negative and does not increase
as the observations become more specific.
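To make Definitions 1 and 2 concrete, the short sketch below enumerates the states of a toy coverage function and checks that the expected marginal gain of an item does not increase as the observation becomes more specific. The coverage sets, probabilities, and function names are invented for this example and are not from the paper.

```python
import itertools

cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}  # region covered by each working item
p = [0.5, 0.5, 0.5]                                    # P(phi[i] = 1), made up

def f(A, phi):
    """Coverage reward: size of the union of regions of chosen items in state 1."""
    return len(set().union(*[cover[i] for i in A if phi[i] == 1]))

def expected_gain(i, y):
    """E[f(dom(y) + {i}) - f(dom(y)) | Phi ~ y], by enumerating consistent states."""
    dom = {j for j, s in enumerate(y) if s != 0}
    free = [j for j, s in enumerate(y) if s == 0]
    total, weight = 0.0, 0.0
    for states in itertools.product([-1, 1], repeat=len(free)):
        phi = list(y)
        w = 1.0
        for j, s in zip(free, states):
            phi[j] = s
            w *= p[j] if s == 1 else 1 - p[j]
        total += w * (f(dom | {i}, phi) - f(dom, phi))
        weight += w
    return total / weight

# The observation (1, 1, 0) is more specific than (0, 0, 0), so the expected
# gain of item 2 must not increase (adaptive submodularity):
print(expected_gain(2, [0, 0, 0]), ">=", expected_gain(2, [1, 1, 0]))
```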
Let π^g be the greedy policy for maximizing f, a policy that always selects the item with the highest
expected gain:
    π^g(y) = arg max_{i ∈ I \ dom(y)} g_i(y),          (4)
where:
    g_i(y) = E_Φ[f(dom(y) ∪ {i}, Φ) − f(dom(y), Φ) | Φ ∼ y]          (5)
is the expected gain of choosing item i after observing y. Then, based on the result of Golovin and
Krause [5], π^g is a (1 − 1/e)-approximation to π*: E_Φ[f(π^g_K(Φ), Φ)] ≥ (1 − 1/e) E_Φ[f(π*_K(Φ), Φ)],
if f is adaptive submodular and adaptive monotonic. In the rest of this paper, we say that an observation y is a context if it can be observed under the greedy policy π^g. Specifically, there exist k and
φ such that y = φ⟨π^g_k(φ)⟩.
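When the state distribution is known, one step of the greedy policy in Equations (4)-(5) is straightforward to express in code. Below is a minimal Python sketch, assuming a factored distribution with known Bernoulli means and a caller-supplied reward f(A, φ); the Monte Carlo estimate of g_i(y), the function names, and the example coverage sets are our illustration, not part of the paper.

```python
import random

def sample_state_consistent(y, p):
    """Sample a full state phi from P(Phi | Phi ~ y) under independent Bernoulli states."""
    return [y[i] if y[i] != 0 else (1 if random.random() < p[i] else -1)
            for i in range(len(p))]

def greedy_step(f, y, p, num_samples=1000):
    """Pick the item maximizing a Monte Carlo estimate of g_i(y) from Equation (5)."""
    L = len(p)
    dom = {i for i in range(L) if y[i] != 0}
    best_item, best_gain = None, float("-inf")
    for i in set(range(L)) - dom:
        gain = sum(f(dom | {i}, phi) - f(dom, phi)
                   for phi in (sample_state_consistent(y, p) for _ in range(num_samples)))
        if gain / num_samples > best_gain:
            best_item, best_gain = i, gain / num_samples
    return best_item

# Example with an expected-coverage reward; the coverage sets are made up.
cover = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}}
f = lambda A, phi: len(set().union(*[cover[i] for i in A if phi[i] == 1]))
print(greedy_step(f, y=[0, 0, 0], p=[0.9, 0.5, 0.3]))   # item 1 has the largest gain
```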
3
Adaptive Submodularity in Bandit Setting
The greedy policy π^g can be computed only if the objective function f and the distribution of states
P(Φ) are known, because both of these quantities are needed to compute the marginal benefit g_i(y)
(Equation 5). In practice, the distribution P(Φ) is often unknown, for instance in a newly deployed
sensor network where the failure rates of the sensors are unknown. In this paper, we study a natural
variant of adaptive submodular maximization that can model such problems. The distribution P(Φ)
is assumed to be unknown and we learn it by interacting repeatedly with the problem.
3.1
Model
The problem of learning P(Φ) can be cast in many ways. One approach is to directly learn the joint
P(Φ). This approach is not practical for two reasons. First, the number of states φ is exponential in
the number of items L. Second, the state of our problem is observed only partially. As a result, it is
generally impossible to identify the distribution that generates φ. Another possibility is to learn the
probability of individual states φ[i] conditioned on context, observations y under the greedy policy
π^g in up to K steps. This is impractical because the number of contexts is exponential in K.
Clearly, additional structural assumptions are necessary to obtain a practical solution. In this paper,
we assume that the states of items are independent of the context in which the items are chosen. In
particular, the state φ[i] of each item i is drawn i.i.d. from a Bernoulli distribution with mean p_i. In
this setting, the joint probability distribution factors as:
    P(Φ = φ) = Π_{i=1}^L p_i^{1{φ[i]=1}} (1 − p_i)^{1 − 1{φ[i]=1}}          (6)
and the problem of learning P(Φ) reduces to estimating L parameters, the means of the Bernoullis.
A major question is how restrictive is our independence assumption. We argue that this assumption
is fairly natural in many applications. For instance, consider a sensor network where the sensors fail
at random due to manufacturing defects. The failures of these sensors are independent of each other
and thus can be modeled in our framework. To validate our assumption, we conduct an experiment
(Section 4) that shows that it does not greatly affect the performance of our method on a real-world
problem. Correlations obviously exist and we discuss how to model them in Section 6.
Based on the independence assumption, we rewrite the expected gain (Equation 5) as:
    g_i(y) = p_i ḡ_i(y),          (7)
where:
    ḡ_i(y) = E_Φ[f(dom(y) ∪ {i}, Φ) − f(dom(y), Φ) | Φ ∼ y, Φ[i] = 1]          (8)
is the expected gain when item i is in state 1. For simplicity of exposition, we assume that the gain
is zero when the item is in state −1. We discuss how to relax this assumption in Appendix.
In general, the gain ḡ_i(y) depends on P(Φ) and thus cannot be computed when P(Φ) is unknown.
In this paper, we assume that ḡ_i(y) can be computed without knowing P(Φ). This scenario is quite
common in practice. In maximum coverage problems, for instance, it is quite reasonable to assume
that the covered area is only a function of the chosen items and their states. In other words, the gain
can be computed as ḡ_i(y) = f(dom(y) ∪ {i}, φ) − f(dom(y), φ), where φ is any state such that
φ ∼ y and φ[i] = 1.
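As a concrete illustration of Equations (7)-(8), the toy maximum-coverage instance below shows a case where ḡ_i(y) is computable from the chosen items and their observed states alone, so that g_i(y) = p_i · ḡ_i(y) needs nothing about P(Φ) beyond the mean p_i. The coverage sets and Bernoulli means are made up for the example.

```python
coverage = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}   # region covered by each working item

def f(chosen, phi):
    """Reward: size of the union of regions covered by chosen items in state 1."""
    covered = set()
    for i in chosen:
        if phi[i] == 1:
            covered |= coverage[i]
    return len(covered)

def gbar(i, y):
    """Gain of item i assuming its state is 1 (Equation 8); any phi ~ y with phi[i] = 1 works."""
    phi = [s if s != 0 else -1 for s in y]            # unobserved items contribute nothing here
    phi[i] = 1
    dom = {j for j, s in enumerate(y) if s != 0}
    return f(dom | {i}, phi) - f(dom, phi)

p = [0.9, 0.5, 0.2]                                   # Bernoulli means (illustrative)
y = [1, 0, 0]                                         # item 0 observed in state 1
print({i: p[i] * gbar(i, y) for i in (1, 2)})         # item 1 only adds "c" on top of {"a","b"}
```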
Our learning problem comprises n episodes. In episode t, we adaptively choose K items according
to some policy π^t, which may differ from episode to episode. The quality of the policy is measured
by the expected cumulative K-step return E_{Φ_1,...,Φ_n}[Σ_{t=1}^n f(π^t_K(Φ_t), Φ_t)]. We compare this return
to that of the greedy policy π^g and measure the difference between the two returns by the expected
cumulative regret:
    R(n) = E_{Φ_1,...,Φ_n}[Σ_{t=1}^n R_t(Φ_t)] = E_{Φ_1,...,Φ_n}[Σ_{t=1}^n (f(π^g_K(Φ_t), Φ_t) − f(π^t_K(Φ_t), Φ_t))].          (9)
In maximum coverage problems, the greedy policy π^g is a good surrogate for the optimal policy π*
because it is a (1 − 1/e)-approximation to π* (Section 2).
Algorithm 1 OASM: Optimistic adaptive submodular maximization.
Input: States φ_1, . . . , φ_n
for all i ∈ I do  Select item i and set p̂_{i,1} to its state, T_i(0) ← 1  end for          . Initialization
for all t = 1, 2, . . . , n do
  A ← ∅
  for all k = 1, 2, . . . , K do          . K-step maximization
    y ← φ_t⟨A⟩
    A ← A ∪ { arg max_{i ∈ I \ A} (p̂_{i,T_i(t−1)} + c_{t−1,T_i(t−1)}) ḡ_i(y) }          . Choose the highest index
  end for
  for all i ∈ I do  T_i(t) ← T_i(t − 1)  end for
  for all i ∈ A do          . Update statistics
    T_i(t) ← T_i(t) + 1
    p̂_{i,T_i(t)} ← (1/T_i(t)) (p̂_{i,T_i(t−1)} T_i(t − 1) + (1/2)(φ_t[i] + 1))
  end for
end for
3.2
Algorithm
Our algorithm is designed based on the optimism in the face of uncertainty principle, a strategy that
is at the core of many bandit algorithms [1, 8, 13]. More specifically, it is a greedy policy where the
expected gain g_i(y) (Equation 7) is substituted for its optimistic estimate. The algorithm adaptively
maximizes a submodular function in an optimistic fashion and therefore we refer to it as Optimistic
Adaptive Submodular Maximization (OASM).
The pseudocode of our method is given in Algorithm 1. In each episode, we maximize the function
f in K steps. At each step, we compute the index (p̂_{i,T_i(t−1)} + c_{t−1,T_i(t−1)}) ḡ_i(y) of each item that
has not been selected yet and then choose the item with the highest index. The terms p̂_{i,T_i(t−1)} and
c_{t−1,T_i(t−1)} are the maximum-likelihood estimate of the probability p_i from the first t − 1 episodes
and the radius of the confidence interval around this estimate, respectively. Formally:
    p̂_{i,s} = (1/s) Σ_{z=1}^s (1/2)(φ_{τ(i,z)}[i] + 1),   c_{t,s} = sqrt(2 log(t) / s),          (10)
where s is the number of times that item i is chosen and τ(i, z) is the index of the episode in which
item i is chosen for the z-th time. In episode t, we set s to T_i(t − 1), the number of times that item
i is selected in the first t − 1 episodes. The radius c_{t,s} is designed such that each index is with high
probability an upper bound on the corresponding gain. The index enforces exploration of items that
have not been chosen very often. As the number of past episodes increases, all confidence intervals
shrink and our method starts exploiting most profitable items. The log(t) term guarantees that each
item is explored infinitely often as t → ∞, to avoid linear regret.
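A compact Python rendering of Algorithm 1 follows. It is a sketch under simplifying assumptions: the environment is simulated with independent Bernoulli states, the gain oracle gbar is supplied by the caller, and log(max(t, 2)) is used in the confidence radius to avoid a zero term at t = 1; none of these names come from the paper.

```python
import math
import random

def oasm(L, K, n, gbar, sample_state):
    T = [1] * L                                   # T_i(t): number of pulls of item i
    phi0 = sample_state()                         # initialization: observe each item once
    p_hat = [0.5 * (phi0[i] + 1) for i in range(L)]
    for t in range(1, n + 1):
        phi = sample_state()
        y = [0] * L                               # current observation
        chosen = []
        for _ in range(K):
            def index(i):                         # optimistic index from Equation (10)
                c = math.sqrt(2.0 * math.log(max(t, 2)) / T[i])
                return (p_hat[i] + c) * gbar(i, y)
            i = max((j for j in range(L) if j not in chosen), key=index)
            chosen.append(i)
            y[i] = phi[i]                         # the item's state is revealed
        for i in chosen:                          # update statistics
            p_hat[i] = (p_hat[i] * T[i] + 0.5 * (phi[i] + 1)) / (T[i] + 1)
            T[i] += 1
    return p_hat

# Example run with independent Bernoulli states (the means are made up):
p_true = [0.7, 0.4, 0.1, 0.9]
sample = lambda: [1 if random.random() < q else -1 for q in p_true]
print(oasm(L=4, K=2, n=500, gbar=lambda i, y: 1.0, sample_state=sample))
```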
Algorithm OASM has several notable properties. First, it is a greedy method. Therefore, our policies
can be computed very fast. Second, it is guaranteed to behave near optimally as our estimates of the
gain g_i(y) become more accurate. We prove this claim in Section 3.3. Finally, our algorithm learns
only L parameters and therefore is quite practical. Specifically, note that if an item is chosen in one
context, it helps in refining the estimate of the gain g_i(y) in all other contexts.
3.3
Analysis
In this section, we prove an upper bound on the expected cumulative regret of Algorithm OASM in n
episodes. Before we present the main result, we define notation used in our analysis. We denote by
i*(y) = π^g(y) the item chosen by the greedy policy π^g in context y. Without loss of generality, we
assume that this item is unique in all contexts. The hardness of discriminating between items i and
i*(y) is measured by a gap between the expected gains of the items:
    Δ_i(y) = g_{i*(y)}(y) − g_i(y).          (11)
Our analysis is based on counting how many times the policies π^t and π^g choose a different item at
step k. Therefore, we define several variables that describe the state of our problem at this step. We
denote by Y_k(π) = ∪_φ {φ⟨π_{k−1}(φ)⟩} the set of all possible observations after policy π is executed
for k − 1 steps. We write Y_k = Y_k(π^g) and Y_k^t = Y_k(π^t) when we refer to the policies π^g and π^t,
respectively. Finally, we denote by Y_{k,i} = Y_k ∩ {y : i ≠ i*(y)} the set of contexts where item i is
suboptimal at step k.
Our main result is Theorem 1. Supplementary material for its proof is in Appendix. The terms item
and arm are treated as synonyms, and we use whichever is more appropriate in a given context.
Theorem 1. The expected cumulative regret of Algorithm OASM is bounded as:
    R(n) ≤ Σ_{i=1}^L ℓ_i Σ_{k=1}^K G_k Δ_{i,k} + (2/3) π² L(L+1) Σ_{k=1}^K G_k,          (12)
where the first term is O(log n) and the second term is O(1). Here G_k = (K − k + 1) max_{y ∈ Y_k} max_i g_i(y) is an upper bound on the expected gain of the policy π^g
from step k forward, ℓ_{i,k} = 8 (max_{y ∈ Y_{k,i}} ḡ_i²(y)/Δ_i²(y)) log n is the number of pulls after which arm i is not
likely to be pulled suboptimally at step k, ℓ_i = max_k ℓ_{i,k}, and Δ_{i,k} = (1/ℓ_i) [ℓ_{i,k} − max_{k′<k} ℓ_{i,k′}]⁺ ∈ [0, 1]
is a weight that associates the regret of arm i to step k such that Σ_{k=1}^K Δ_{i,k} = 1.
Proof. Our theorem is proved in three steps. First, we associate the regret in episode t with the first
step where our policy π^t selects a different item from the greedy policy π^g. For simplicity, suppose
that this step is step k. Then the regret in episode t can be written as:
    R_t(φ_t) = f(π^g_K(φ_t), φ_t) − f(π^t_K(φ_t), φ_t)
             = [f(π^g_K(φ_t), φ_t) − f(π^g_{k−1}(φ_t), φ_t)] − [f(π^t_K(φ_t), φ_t) − f(π^t_{k−1}(φ_t), φ_t)]
             = F^g_{k→}(φ_t) − F^t_{k→}(φ_t),          (13)
where the last equality is due to the assumption that π^g_[j](φ_t) = π^t_[j](φ_t) for all j < k; and F^g_{k→}(φ_t)
and F^t_{k→}(φ_t) are the gains of the policies π^g and π^t, respectively, in state φ_t from step k forward.
In practice, the first step where the policies π^t and π^g choose a different item is unknown, because
π^g is unknown. In this case, the regret can be written as:
    R_t(φ_t) = Σ_{i=1}^L Σ_{k=1}^K 1_{i,k,t}(φ_t) (F^g_{k→}(φ_t) − F^t_{k→}(φ_t)),          (14)
where:
    1_{i,k,t}(φ) = 1{∀j < k : π^t_[j](φ) = π^g_[j](φ), π^t_[k](φ) ≠ π^g_[k](φ), π^t_[k](φ) = i}          (15)
is the indicator of the event that the policies π^t and π^g choose the same first k − 1 items in state φ,
disagree in the k-th item, and i is the k-th item chosen by π^t. The commas in the indicator function
represent logical conjunction.
Second, in Lemma 1 we bound the expected loss associated with choosing the first different item at
step k by the probability of this event and an upper bound on the expected loss G_k, which does not
depend on π^t and φ_t. Based on this result, we bound the expected cumulative regret as:
    E_{Φ_1,...,Φ_n}[Σ_{t=1}^n R_t(Φ_t)] = E_{Φ_1,...,Φ_n}[Σ_{t=1}^n Σ_{i=1}^L Σ_{k=1}^K 1_{i,k,t}(Φ_t)(F^g_{k→}(Φ_t) − F^t_{k→}(Φ_t))]
      = Σ_{i=1}^L Σ_{k=1}^K Σ_{t=1}^n E_{Φ_1,...,Φ_{t−1}}[E_{Φ_t}[1_{i,k,t}(Φ_t)(F^g_{k→}(Φ_t) − F^t_{k→}(Φ_t))]]
      ≤ Σ_{i=1}^L Σ_{k=1}^K Σ_{t=1}^n E_{Φ_1,...,Φ_{t−1}}[E_{Φ_t}[1_{i,k,t}(Φ_t)] G_k]
      = Σ_{i=1}^L Σ_{k=1}^K G_k E_{Φ_1,...,Φ_n}[Σ_{t=1}^n 1_{i,k,t}(Φ_t)].          (16)
Finally, motivated by the analysis of UCB1 [1], we rewrite the indicator 1_{i,k,t}(φ_t) as:
    1_{i,k,t}(φ_t) = 1_{i,k,t}(φ_t) 1{T_i(t − 1) ≤ ℓ_{i,k}} + 1_{i,k,t}(φ_t) 1{T_i(t − 1) > ℓ_{i,k}},          (17)
where ℓ_{i,k} is a problem-specific constant. In Lemma 4, we show how to choose ℓ_{i,k} such that arm i
at step k is pulled suboptimally a constant number of times in expectation after ℓ_{i,k} pulls. Based on
this result, the regret corresponding to the events 1{T_i(t − 1) > ℓ_{i,k}} is bounded as:
    Σ_{i=1}^L Σ_{k=1}^K G_k E_{Φ_1,...,Φ_n}[Σ_{t=1}^n 1_{i,k,t}(Φ_t) 1{T_i(t − 1) > ℓ_{i,k}}] ≤ (2/3) π² L(L+1) Σ_{k=1}^K G_k.          (18)
On the other hand, the regret associated with the events 1{T_i(t − 1) ≤ ℓ_{i,k}} is trivially bounded by
Σ_{i=1}^L Σ_{k=1}^K G_k ℓ_{i,k}. A tighter upper bound is proved below:
    E_{Φ_1,...,Φ_n}[Σ_{i=1}^L Σ_{k=1}^K G_k Σ_{t=1}^n 1_{i,k,t}(Φ_t) 1{T_i(t − 1) ≤ ℓ_{i,k}}]
      ≤ Σ_{i=1}^L max_{φ_1,...,φ_n}[Σ_{k=1}^K G_k Σ_{t=1}^n 1_{i,k,t}(φ_t) 1{T_i(t − 1) ≤ ℓ_{i,k}}]
      ≤ Σ_{i=1}^L Σ_{k=1}^K G_k [ℓ_{i,k} − max_{k′<k} ℓ_{i,k′}]⁺.          (19)
The last inequality can be proved as follows. Our upper bound on the expected loss at step k, G_k, is
monotonically decreasing with k, and therefore G_1 ≥ G_2 ≥ . . . ≥ G_K. So for any given arm i, the
highest cumulative regret subject to the constraint T_i(t − 1) ≤ ℓ_{i,k} at step k is achieved as follows.
The first ℓ_{i,1} mistakes are made at the first step, [ℓ_{i,2} − ℓ_{i,1}]⁺ mistakes are made at the second step,
[ℓ_{i,3} − max{ℓ_{i,1}, ℓ_{i,2}}]⁺ mistakes are made at the third step, and so on. Specifically, the number of
mistakes at step k is [ℓ_{i,k} − max_{k′<k} ℓ_{i,k′}]⁺ and the associated loss is G_k.
Our main claim follows from combining the upper bounds in Equations 18 and 19.
Discussion of Theoretical Results
Algorithm OASM mimics the greedy policy ? g . Therefore, we decided to prove Theorem 1 based on
counting how many times the policies ? t and ? g choose a different item. Our proof has three parts.
First, we associate the regret in episode t with the first step where the policy ? t chooses a different
item from ? g . Second, we bound the expected regret in each episode by the probability of deviating
from the policy ? g at step k and an upper bound on the associated loss Gk , which depends only on
k. Finally, we divide the expected cumulative regret into two terms, before and after item i at step k
is selected a sufficient number of times `i,k , and then set `i,k such that both terms are O(log n). We
would like to stress that our proof is relatively general. Our modeling assumptions (Section 3.1) are
leveraged only in Lemma 4. In the rest of the proof, we only assume that f is adaptive submodular
and adaptive monotonic.
Our regret bound has several notable properties. First, it is logarithmic in the number of episodes n,
through problem-specific constants `i,k . So we recover a classical result from the bandit literature.
Second, the bound is polynomial in all constants of interest, such as the number of items L and the
number of maximization steps K in each episode. We would like to stress that it is not linear in the
number of contexts YK at step K, which is exponential in K. Finally, note that our bound captures
the shape of the optimized function f . In particular, because the function f is adaptive submodular,
the upper bound on the gain of the policy ? g from step k forward, Gk , decreases as k increases. As
a result, earlier deviations from ? g are penalized more than later ones.
4
Experiments
Our algorithm is evaluated on a preference elicitation problem in a movie recommendation domain.
This problem is cast as asking K yes-or-no movie-genre questions. The users and their preferences
are extracted from the MovieLens dataset [11], a dataset of 6k users who rated one million movies.
Genre        g_i(∅)   ḡ_i(∅)   P(Φ[i] = 1)
Crime         4.1%    13.0%    0.32
Children's    4.1%     9.2%    0.44
Animation     3.2%     6.6%    0.48
Horror        3.0%     8.0%    0.38
Sci-Fi        2.8%    23.0%    0.12
Musical       2.6%     6.0%    0.44
Fantasy       2.6%     5.8%    0.44
Adventure     2.3%    19.6%    0.12
[Figure 1, right panel: plot of the expected percentage of covered movies against the number of
questions K for the greedy policies π^g, π^g_f (Factored), and π^g_d (Deterministic).]
Figure 1: Left. Eight movie genres that cover the largest number of movies in expectation. Right.
Comparison of three greedy policies for solving our preference elicitation problem. For each policy
and K ≤ L, we report the expected percentage of covered movies after K questions.
[Figure 2: three panels (K = 2, K = 4, K = 8) plotting the covered movies (%) against the episode
t on a logarithmic scale from 10^1 to 10^5.]
Figure 2: The expected return of the OASM policy π^t (cyan lines) in all episodes up to t = 10^5. The
return is compared to those of the greedy policies π^g (blue lines), π^g_f (red lines), and π^g_d (gray lines)
in the offline setting (Figure 1) at the same operating point, the number of asked questions K.
We choose 500 most rated movies from the dataset. Each movie l is represented by a feature vector
x_l such that x_l[i] = 1 if the movie belongs to genre i and x_l[i] = 0 if it does not. The preference of
user j for genre i is measured by tf-idf, a popular importance score in information retrieval [12]. In
particular, it is defined as tf-idf(j, i) = #(j, i) log(n_u / #(·, i)), where #(j, i) is the number of movies
from genre i rated by user j, n_u is the number of users, and #(·, i) is the number of users that rated
at least one movie from genre i. Intuitively, this score prefers genres that are often rated by the user
but rarely rated overall. Each user j is represented by a genre preference vector φ such that φ[i] = 1
when genre i is among five most favorite genres of the user. These genres cover on average 25% of
our movies. In Figure 1, we show several popular genres from our dataset.
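The tf-idf preference score above is straightforward to compute; the sketch below uses a fabricated toy rating table purely to illustrate the quantities #(j, i), n_u, and #(·, i).

```python
import math

ratings = {                                # user -> list of rated movies, each a set of genres
    "u1": [{"Crime"}, {"Crime", "Drama"}, {"Sci-Fi"}],
    "u2": [{"Drama"}, {"Drama"}],
    "u3": [{"Sci-Fi"}, {"Crime"}],
}
n_u = len(ratings)
raters = {}                                # #(., i): users that rated genre i at least once
for user, movies in ratings.items():
    for genres in movies:
        for g in genres:
            raters.setdefault(g, set()).add(user)

def tfidf(user, genre):
    count = sum(genre in genres for genres in ratings[user])   # #(j, i)
    return count * math.log(n_u / len(raters[genre]))

print(tfidf("u1", "Crime"), tfidf("u1", "Drama"))
```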
The reward for asking user φ questions A is:
    f(A, φ) = (1/5) Σ_{l=1}^{500} max_i [x_l[i] 1{φ[i] = 1} 1{i ∈ A}],          (20)
the percentage of movies that belong to at least one genre i that is preferred by the user and queried
in A. The function f captures the notion that knowing more preferred genres is better than knowing
less. It is submodular in A for any given preference vector φ, and therefore adaptive submodular in
A when the preferences are distributed independently of each other (Equation 6). In this setting, the
expected value of f can be maximized near optimally by a greedy policy (Equation 4).
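Equation (20) can be transcribed directly; in the sketch below the factor 1/5 is generalized to 100/|movies| (the two coincide for 500 movies), and the tiny genre matrix is illustrative only.

```python
def reward(A, phi, X):
    """Percentage of movies covered by a preferred genre queried in A (Equation 20)."""
    covered = sum(
        max(x[i] * (phi[i] == 1) * (i in A) for i in range(len(phi)))
        for x in X
    )
    return 100.0 * covered / len(X)

X = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 0]]   # 4 movies, 3 genres (made up)
phi = [1, -1, 1]                                   # the user prefers genres 0 and 2
print(reward({0}, phi, X))       # genre 0 covers the first two movies -> 50.0
print(reward({0, 2}, phi, X))    # adding genre 2 also covers the third -> 75.0
```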
In the first experiment, we show that our assumption on P(Φ) (Equation 6) is not very restrictive in
our domain. We compare three greedy policies for maximizing f that know P(Φ) and differ in how
the expected gain of choosing items is estimated. The first policy π^g makes no assumption on P(Φ)
and computes the gain according to Equation 5. The second policy π^g_f assumes that the distribution
P(Φ) is factored and computes the gain using Equation 7. Finally, the third policy π^g_d computes the
gain according to Equation 8, essentially ignoring the stochasticity of our problem. All policies are
applied to all users in our dataset for all K ≤ L and their expected returns are reported in Figure 1.
We observe two trends. First, the policy π^g_f usually outperforms the policy π^g_d by a large margin. So
although our independence assumption may be incorrect, it is a better approximation than ignoring
the stochastic nature of the problem. Second, the expected return of π^g_f is always within 84% of π^g.
We conclude that π^g_f is a good approximation to π^g.
In the second experiment, we study how the OASM policy π^t improves over time. In each episode t,
we randomly choose a new user φ_t and then the policy π^t asks K questions. The expected return of
π^t is compared to two offline baselines, π^g_f and π^g_d. The policies π^g_f and π^g_d can be viewed as upper
and lower bounds on the expected return of π^t, respectively. Our results are shown in Figure 2. We
observe two major trends. First, π^t easily outperforms the baseline π^g_d that ignores the stochasticity
of our problem. In two cases, this happens in less than ten episodes. Second, the expected return of
π^t approaches that of π^g_f, as is expected based on our analysis.
5
Related Work
Our paper is motivated by prior work in the areas of submodularity [14, 5] and bandits [1]. Similar
problems to ours were studied by several authors. For instance, Yue and Guestrin [17], and Guillory
and Bilmes [6], applied bandits to submodular problems in a non-adaptive setting. In our work, we
focus on the adaptive setting. This setting is more challenging because we learn a K-step policy for
choosing items, as opposing to a single set of items. Wen et al. [16] studied a variant of generalized
binary search, sequential Bayesian search, where the policy for asking questions is learned on-thefly by interacting with the environment. A major observation of Wen et al. [16] is that this problem
can be solved near optimally without exploring. As a result, its solution and analysis are completely
different from those in our paper.
Learning with trees was studied in machine learning in many settings, such as online learning with
tree experts [3]. This work is similar to ours only in trying to learn a tree. The notions of regret and
the assumptions on solved problems are completely different. Optimism in the face of uncertainty is
a popular approach to designing learning algorithms, and it was previously applied to more general
problems than ours, such as planning [13] and MDPs [8]. Both of these solutions are impractical in
our setting. The former assumes that the model of the world is known and the latter is computationally intractable.
6
Conclusions
This is the first work that studies adaptive submodular maximization in the setting where the model
of the world is initially unknown. We propose an efficient bandit algorithm for solving the problem
and prove that its expected cumulative regret increases logarithmically with time. Our work can be
viewed as reinforcement learning (RL) [15] for adaptive submodularity. The main difference in our
setting is that we can learn near-optimal policies without estimating the value function. Learning of
value functions is typically hard, even when the model of the problem is known. Fortunately, this is
not necessary in our problem and therefore we can develop a very efficient learning algorithm.
We assume that the states of items are distributed independently of each other. In our experiments,
this assumption was less restrictive than we expected (Section 4). Nevertheless, we believe that our
approach should be studied under less restrictive assumptions. In preference elicitation (Section 4),
for instance, the answers to questions are likely to be correlated due to many factors, such as user's
preferences, user's mood, and the similarity of the questions. Our current model cannot capture any
of these dependencies. However, we believe that our approach is quite general and can be extended
to more complex models. We think that any such generalization would comprise three major steps:
choosing a model of P (?), deriving a corresponding upper confidence bound on the expected gain,
and finally proving an equivalent of Lemma 4.
We also assume that the expected gain of choosing an item (Equation 7) can be written as a product
of some known gain function (Equation 8) and the probability of the item?s states. This assumption
is quite natural in maximum coverage problems but may not be appropriate in other problems, such
as generalized binary search [4].
Our upper bound on the expected regret at step k (Lemma 1) may be loose in practice because it is
obtained by maximizing over all contexts y ? Yk . In general, it is difficult to prove a tighter bound.
Such a bound would have to depend on the probability of making a mistake in a specific context at
step k, which depends on the policy in that episode, and indirectly on the progress of learning in all
earlier episodes. We leave this for future work.
References
[1] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed
bandit problem. Machine Learning, 47:235–256, 2002.
[2] Sandilya Bhamidipati, Branislav Kveton, and S. Muthukrishnan. Minimal interaction search:
Multi-way search with item categories. In Proceedings of AAAI Workshop on Intelligent Techniques for Web Personalization and Recommendation, 2013.
[3] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, 2006.
[4] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 17, pages 337–344, 2005.
[5] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.
[6] Andrew Guillory and Jeff Bilmes. Online submodular set cover, ranking, and repeated active
learning. In Advances in Neural Information Processing Systems 24, pages 1107–1115, 2011.
[7] Andrew Guillory and Jeff Bilmes. Simultaneous learning and covering with adversarial noise.
In Proceedings of the 28th International Conference on Machine Learning, pages 369–376, 2011.
[8] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement
learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[9] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a
social network. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003.
[10] Andreas Krause, Ajit Paul Singh, and Carlos Guestrin. Near-optimal sensor placements in
Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine
Learning Research, 9:235–284, 2008.
[11] Shyong Lam and Jon Herlocker. MovieLens 1M Dataset. http://www.grouplens.org/node/12, 2012.
[12] Christopher Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information
Retrieval. Cambridge University Press, New York, NY, 2008.
[13] Rémi Munos. The optimistic principle applied to games, optimization, and planning: Towards
foundations of Monte-Carlo tree search. Foundations and Trends in Machine Learning, 2012.
[14] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 14(1):265–294, 1978.
[15] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press,
Cambridge, MA, 1998.
[16] Zheng Wen, Branislav Kveton, Brian Eriksson, and Sandilya Bhamidipati. Sequential Bayesian
search. In Proceedings of the 30th International Conference on Machine Learning, pages 977–983, 2013.
[17] Yisong Yue and Carlos Guestrin. Linear submodular bandits and their application to diversified
retrieval. In Advances in Neural Information Processing Systems 24, pages 2483–2491, 2011.
4,371 | 4,956 |
Auditing: Active Learning with
Outcome-Dependent Query Costs
Sivan Sabato
Microsoft Research New England
[email protected]
Anand D. Sarwate
TTI-Chicago
[email protected]
Nathan Srebro
Technion-Israel Institute of Technology and TTI-Chicago
[email protected]
Abstract
We propose a learning setting in which unlabeled data is free, and the cost of a
label depends on its value, which is not known in advance. We study binary classification in an extreme case, where the algorithm only pays for negative labels.
Our motivation are applications such as fraud detection, in which investigating
an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative
labels the algorithm requires in order to learn a hypothesis with low relative error. We design auditing algorithms for simple hypothesis classes (thresholds and
rectangles), and show that with these algorithms, the auditing complexity can be
significantly lower than the active label complexity. We also show a general competitive approach for learning with outcome-dependent costs.
1
Introduction
Active learning algorithms seek to mitigate the cost of learning by using unlabeled data and sequentially selecting examples to query for their label to minimize total number of queries. In some cases,
however, the actual cost of each query depends on the true label of the example and is thus not known
before the label is requested. For instance, in detecting fraudulent credit transactions, a query with
a positive answer is not wasteful, whereas a negative answer is the result of a wasteful investigation
of an honest transaction, and perhaps a loss of good-will. More generally, in a multiclass setting,
different queries may entail different costs, depending on the outcome of the query. In this work we
focus on the binary case, and on the extreme version of the problem, as described in the example of
credit fraud, in which the algorithm only pays for queries which return a negative label. We term
this setting auditing, and the cost incurred by the algorithm its auditing complexity.
There are several natural ways to measure performance for auditing. For example, we may wish
the algorithm to maximize the number of positive labels it finds for a fixed "budget" of negative
labels, or to minimize the number of negative labels while finding a certain number or fraction of
positive labels. In this work we focus on the classical learning problem, in which one attempts to
learn a classifier from a fixed hypothesis class, with an error close to the best possible. Similar to
active learning, we assume we are given a large set of unlabeled examples, and aim to learn with
minimal labeling cost. But unlike active learning, we only incur a cost when requesting the label of
an example that turns out to be negative.
The close relationship between auditing and active learning raises natural questions. Can the auditing complexity be significantly better than the label complexity in active learning? If so, should
algorithms be optimized for auditing, or do optimal active learning algorithms also have low auditing complexity? To answer these questions, and demonstrate the differences between active learning
and auditing, we study the simple hypothesis classes of thresholds and of axis-aligned rectangles in
R^d, in both the realizable and the agnostic settings. We then also consider a general competitive
analysis for arbitrary hypothesis classes.
Other work. Existing work on active learning with costs (Margineantu, 2007; Kapoor et al., 2007;
Settles et al., 2008; Golovin and Krause, 2011) typically assumes that the cost of labeling each
point is known a priori, so the algorithm can use the costs directly to select a query. Our model is
significantly different, as the costs depend on the outcome of the query itself. Kapoor et al. (2007)
do mention the possibility of class-dependent costs, but this possibility is not studied in detail. An
unrelated game-theoretic learning model addressing "auditing" was proposed by Blocki et al. (2011).
Notation and Setup
For an integer m, let [m] = {1, 2, . . . , m}. The function I[A] is the indicator function of a set A.
For a function f and a sub-domain X, f|_X is the restriction of f to X. For vectors a and b in R^d,
the inequality a ≤ b implies a_i ≤ b_i for all i ∈ [d].
We assume a data domain X and a distribution D over labeled data points in X × {−1, +1}. A
learning algorithm may sample i.i.d. pairs (X, Y) ∼ D. It then has access to the value of X, but the
label Y remains hidden until queried. The algorithm returns a labeling function ĥ : X → {−1, +1}.
The error of a function h : X → {−1, +1} on D is err(D, h) = E_{(X,Y)∼D}[I[h(X) ≠ Y]]. The error
of h on a multiset S ⊆ X × {−1, +1} is given by err(S, h) = (1/|S|) Σ_{(x,y)∈S} I[h(x) ≠ y]. The
passive sample complexity of an algorithm is the number of pairs it draws from D. The active label
complexity of an algorithm is the total number of label queries the algorithm makes. Its auditing
complexity is the number of queries the algorithm makes on points with negative labels.
We consider guarantees for learning algorithms relative to a hypothesis class H ⊆ {−1, +1}^X. We
denote the error of the best hypothesis in H on D by err(D, H) = min_{h∈H} err(D, h). Similarly,
err(S, H) = min_{h∈H} err(S, h). We usually denote the best error for D by η = err(D, H).
To describe our algorithms it will be convenient to define the following sample sizes, using universal
constants C, c > 0. Let δ ∈ (0, 1) be a confidence parameter, and let ε ∈ (0, 1) be an error parameter.
Let m_ag(ε, δ, d) = C(d + ln(c/δ))/ε². If a sample S is drawn from D with |S| = m_ag(ε, δ, d) then
with probability 1 − δ, ∀h ∈ H, err(D, h) ≤ err(S, h) + ε and err(S, H) ≤ err(D, H) + ε (Bartlett
and Mendelson, 2002). Let m_ν(ν, δ, d) = C(d ln(c/ν) + ln(c/δ))/ν². Results of Vapnik and
Chervonenkis (1971) show that if H has VC dimension d and S is drawn from D with |S| = m_ν,
then for all h ∈ H,
    err(S, h) ≤ max{err(D, h)(1 + ν), err(D, h) + ν}  and
    err(D, h) ≤ max{err(S, h)(1 + ν), err(S, h) + ν}.          (1)
2
Active Learning vs. Auditing: Summary of Results
The main point of this paper is that the auditing complexity can be quite different from the active
label complexity, and that algorithms tuned to minimizing the audit label complexity give improvements over standard active learning algorithms. Before presenting these differences, we note that in
some regimes, neither active learning nor auditing can improve significantly over the passive sample
complexity. In particular, a simple adaptation of a result of Beygelzimer et al. (2009), establishes
the following lower bound.
Lemma 2.1. Let H be a hypothesis class with VC dimension d > 1. If an algorithm always finds a
hypothesis ĥ with err(D, ĥ) ≤ err(D, H) + ε for ε > 0, then for any η ∈ (0, 1) there is a distribution
D with η = err(D, H) such that the auditing complexity of this algorithm for D is Ω(dη²/ε²).
That is, when η is fixed while ε → 0, the auditing complexity scales as Ω(d/ε²), similar to the
passive sample complexity. Therefore the two situations which are interesting are the realizable
case, corresponding to η = 0, and the agnostic case, when we want to guarantee an excess error
ε such that ε/η is bounded. We provide results for both of these regimes.
We will first consider the realizable case, when η = 0. Here it is sufficient to consider the case
where a fixed pool S of m points is given and the algorithm must return a hypothesis ĥ such that
err(S, ĥ) = 0 with probability 1. A pool labeling algorithm can be used to learn a hypothesis
which is good for a distribution by drawing and labeling a large enough pool. We define auditing
complexity for an unlabeled pool as the minimal number of negative labels needed to perfectly
classify it. It is easy to see that there are pools with an auditing complexity at least the VC dimension
of the hypothesis class.
For the agnostic case, when η > 0, we denote α = ε/η and say that an algorithm (α, δ)-learns a
class of distributions D with respect to H if for all D ∈ D, with probability 1 − δ, the hypothesis ĥ returned by
the algorithm satisfies err(D, ĥ) ≤ (1 + α)η. By Lemma 2.1 an auditing complexity of Ω(d/α²)
is unavoidable, but we can hope to improve over the passive sample complexity lower bound of
Ω(d/(ηα²)) (Devroye and Lugosi, 1995) by avoiding the dependence on η.
Our main results are summarized in Table 1, which shows the auditing and active learning complexities in the two regimes, for thresholds on [0, 1] and axis-aligned rectangles in R^d, where we assume
that the hypotheses label the points in the rectangle as negative and points outside as positive.

             |          Realizable          |                    Agnostic
             | Thresholds   | Rectangles    | Thresholds               | Rectangles
    Active   | Θ(ln m)      | m             | Ω(ln(1/η) + 1/α²)        | Ω(d(1/η + 1/α²))
    Auditing | 1            | 2d            | O(1/α²)                  | O(d² ln²(1/η) · (1/α²) ln(1/δ))

Table 1: Auditing complexity upper bounds vs. active label complexity lower bounds for realizable
(pool size m) and agnostic (err(D, H) = η) cases. Agnostic bounds are for (α, δ)-learning with a
fixed δ, where α = ε/η.
In the realizable case, for thresholds, the optimal active learning algorithm performs binary search,
resulting in Θ(ln m) labels in the worst case. This is a significant improvement over the passive label
complexity of m. However, a simple auditing procedure that scans from right to left queries only
a single negative point, achieving an auditing complexity of 1. For rectangles, we present a simple
coordinate-wise scanning procedure with auditing complexity of at most 2d, demonstrating a huge
gap versus active learning, where the labels of all m points might be required. Not all classes enjoy
reduced auditing complexity: we also show that for rectangles with positive points on the inside,
there exist pools of size m with an auditing complexity of m.
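The right-to-left scan for realizable thresholds can be written in a few lines. The sketch below assumes a label oracle and returns the largest negative point; any threshold between that point and the next larger one is consistent with the pool, and exactly one negative label is ever paid for.

```python
def audit_threshold_realizable(points, query):
    """points: unlabeled pool; query(x) -> +1/-1. Labels are +1 iff x >= true threshold."""
    for x in sorted(points, reverse=True):
        if query(x) == -1:                # first (and only) negative query, then stop
            return x                      # the threshold lies just above this point
    return min(points)                    # all points positive: threshold below the pool

# Example with true threshold a = 0.4:
pool = [0.1, 0.35, 0.5, 0.8, 0.9]
print(audit_threshold_realizable(pool, lambda x: 1 if x >= 0.4 else -1))
```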
In the agnostic case we wish to (α, δ)-learn distributions with a true error of η = err(D, H), for
constant α, δ. For active learning, it has been shown that in some cases, the Ω(d/η) passive sample
complexity can be replaced by an exponentially smaller O(d ln(1/η)) active label complexity (Hanneke, 2011), albeit sometimes with a larger polynomial dependence on d. In other cases, an Ω(1/η)
dependence exists also for active learning. Our main question is whether the dependence on η in the
active label complexity can be further reduced for auditing.
For thresholds, active learning requires Ω(ln(1/η)) labels (Kulkarni et al., 1993). Using auditing,
we show that the dependence on η can be completely removed, for any true error level η > 0, if
we know η in advance. We also show that if η is not known at least approximately, the logarithmic
dependence on 1/η is unavoidable also for auditing. For rectangles, we show that the active label
complexity is at least Ω(d/η). In contrast, we propose an algorithm with an auditing complexity
of O(d² ln²(1/η)), reducing the linear dependence on 1/η to a logarithmic dependence. We do not
know whether a linear dependence on d is possible with a logarithmic dependence on 1/η.
Omitted proofs of results below are provided in the extended version of this paper (Sabato et al.,
2013).
3
Auditing for Thresholds on the Line
The first question to ask is whether the audit label complexity can ever be significantly smaller than
the active or passive label complexities, and whether a different algorithm is required to achieve this
improvement. The following simple case answers both questions in the affirmative. Consider the
hypothesis class of thresholds on the line, defined over the domain X = [0, 1]. A hypothesis with
threshold a is h_a(x) = I[x − a ≥ 0]. The hypothesis class is H_a = {h_a | a ∈ [0, 1]}. Consider
the pool setting for the realizable case. The optimal active label complexity of Θ(log₂ m) can be
achieved by a binary search on the pool. The auditing complexity of this algorithm can also be as
large as Ω(log₂(m)). However, auditing allows us to beat this barrier. This case exemplifies an interesting contrast between auditing and active learning. Due to information-theoretic considerations,
any algorithm which learns an unlabeled pool S has an active label complexity of at least log₂|H|_S|
(Kulkarni et al., 1993), where H|_S is the set of restrictions of functions in H to the domain S. For
H_a, log₂|H_a|_S| = Θ(log₂ m). However, the same considerations are invalid for auditing.
We showed that for the realizable case, the auditing label complexity for H_a is a constant. We now
provide a more complex algorithm that guarantees this for (α, δ)-learning in the agnostic case. The
intuition behind our approach is that to get the optimal threshold in a pool with at most k errors, we
can query from highest to lowest until observing k + 1 negative points and then find the minimal
error threshold on the labeled points.
Lemma 3.1. Let S be a pool of size m in [0, 1], and assume that err(S, H_a) ≤ k/m. Then the
procedure above finds ĥ such that err(S, ĥ) = err(S, H_a) with an auditing complexity of k + 1.
Proof. Denote the last queried point by x_0, and let h_{a*} = argmin_{h∈H_a} err(S, h). Since
err(S, h_{a*}) ≤ k/m, a* > x_0. Denote by S′ ⊆ S the set of points queried by the procedure.
For any a > x_0, err(S, h_a) = err(S′, h_a) + |{(x, y) ∈ S | x < x_0, y = 1}|/m. Therefore,
minimizing the error on S′ results in a hypothesis that minimizes the error on S.
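A sketch of the procedure analyzed in Lemma 3.1: query from highest to lowest, stop after k + 1 negatives, and take the empirical-error-minimizing threshold over the queried prefix. The candidate-threshold construction is our implementation choice, not prescribed by the paper.

```python
def audit_threshold_agnostic(points, labels, k):
    """Query right to left, paying only for negatives, until k + 1 negatives are seen."""
    order = sorted(range(len(points)), key=lambda j: -points[j])
    queried, negatives = [], 0
    for j in order:                          # auditing cost = number of negative queries
        queried.append(j)
        negatives += labels[j] == -1
        if negatives == k + 1:
            break
    # candidate thresholds: at each queried point, or below all points
    candidates = [points[j] for j in queried] + [min(points) - 1.0]
    def prefix_err(a):                       # errors among the queried points only
        return sum((points[j] >= a) != (labels[j] == 1) for j in queried)
    return min(candidates, key=prefix_err)

pts = [0.1, 0.3, 0.45, 0.6, 0.7, 0.95]
lbs = [-1, -1, 1, -1, 1, 1]                  # one label error relative to the best threshold
print(audit_threshold_agnostic(pts, lbs, k=1))
```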
To learn from a distribution, one can draw a random sample and use it as the pool in the procedure above. However, the sample size required for passive (ε, δ)-learning of thresholds is Θ(ln(1/δ)/ε). Thus, the number of errors in the pool would be k = ν · Θ(ln(1/δ)/ε) = Θ(ln(1/δ)), which depends on δ. To avoid this dependence, the auditing algorithm we propose uses Alg. 1 below to select a subset of the random sample, which still represents the distribution well, but its size is only Θ(1/ν).
Lemma 3.2. Let δ, ν_max ∈ (0, 1). Let S be a pool such that err(S, H_a) ≤ ν_max. Let S_q be the output of Alg. 1 with inputs S, ν_max, δ, and let ĥ = argmin_{h∈H_a} err(S_q, h). Then with probability 1 − δ, err(S_q, ĥ) ≤ 6ν_max and err(S, ĥ) ≤ 17ν_max.
The algorithm for auditing thresholds on the line in the agnostic case is listed in Alg. 2. This algorithm first achieves (C, δ)-learning of H_a for a fixed C (in step 7, based on Lemma 3.2 and Lemma 3.1), and then improves its accuracy to achieve (ε, δ)-learning for ε > 0, by additional passive sampling in a restricted region. The following theorem provides the guarantees for Alg. 2.
Algorithm 1: Representative Subset Selection
1: Input: pool S = (x₁, . . . , x_m) (with hidden labels), x_i ∈ [0, 1], ν_max ∈ (0, 1], δ ∈ (0, 1).
2: T ← max{⌊1/(3ν_max)⌋, 1}.
3: Let U be the multiset with T copies of each point in S.
4: Sort and rename the points in U such that x′_i ≤ x′_{i+1} for all i ∈ [T m].
5: Let S_q be an empty multiset.
6: for t = 1 to T do
7:   S(t) ← {x′_{(t−1)m+1}, . . . , x′_{tm}}.
8:   Draw 14 ln(8/δ) random points from S(t) independently uniformly at random and add them to S_q (with duplications).
9: end for
10: Return S_q (with the corresponding hidden labels).
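A short Python sketch of Algorithm 1 (names are illustrative; only indices are manipulated, so no hidden labels are revealed here), assuming S is a list of values:

import math
import random

def representative_subset(S, nu_max, delta, rng=random.Random(0)):
    m = len(S)
    T = max(int(1 / (3 * nu_max)), 1)
    U = sorted(S * T)                 # multiset: T copies of each point, sorted
    Sq = []
    per_block = int(math.ceil(14 * math.log(8 / delta)))
    for t in range(1, T + 1):
        block = U[(t - 1) * m : t * m]        # the t-th block of m points
        # Sample with replacement (duplications allowed, as in the pseudocode).
        Sq.extend(rng.choice(block) for _ in range(per_block))
    return Sq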
Algorithm 2: Auditing for Thresholds with a constant ν
1: Input: ν_max, ε, δ ∈ (0, 1), access to distribution D such that err(D, H_a) ≤ ν_max.
2: β ← ε/5.
3: Draw a random labeled pool (with hidden labels) S₀ of size m_ε(β, δ/2, 1) from D.
4: Draw a random sample S of size m_ag((1 + β)ν_max, δ/2, 1) uniformly from S₀.
5: Get a subset S_q using Alg. 1 with inputs S, 2(1 + β)ν_max, δ/2.
6: Query points in S_q from highest to lowest. Stop after ⌈12|S_q|(1 + β)ν_max⌉ + 1 negatives.
7: Find â such that h_â minimizes the error on the labeled part of S_q.
8: Let S₁ be the set of the 36(1 + β)ν_max|S₀| closest points to â in S from each side of â.
9: Draw S₂ of size m_ag(ε/72, δ/2, 1) from S₁ (see definition on page 2).
10: Query all points in S₂, and return ĥ that minimizes the error on S₂.
Theorem 3.3. Let ν_max, ε, δ ∈ (0, 1). Let D be a distribution with error err(D, H_a) ≤ ν_max. Alg. 2 with input ν_max, ε, δ has an auditing complexity of O(ln(1/δ)/ε²), and returns ĥ such that with probability 1 − δ, err(D, ĥ) ≤ (1 + ε)ν_max.
It immediately follows that if ν = err(D, H) is known, (ε, δ)-learning is achievable with an auditing complexity that does not depend on ν. This is formulated in the following corollary.
Corollary 3.4 ((ε, δ)-learning for H_a). Let ν, ε, δ ∈ (0, 1]. For any distribution D with error err(D, H_a) = ν, Alg. 2 with inputs ν_max = ν, ε, δ (ε, δ)-learns D with respect to H_a with an auditing complexity of O(ln(1/δ)/ε²).
A similar result holds if the error is known up to a multiplicative constant. But what if no bound on ν is known? The following lower bound shows that in this case, the best auditing complexity for thresholds is similar to the best active label complexity.
Theorem 3.5 (Lower bound on auditing H_a without ν_max). Consider any constant ε ≥ 0. For any δ ∈ (0, 1), if an auditing algorithm (ε, δ)-learns any distribution D such that err(D, H_a) ≥ ν_min, then the algorithm's auditing complexity is Ω(ln((1 − δ)/δ) ln(1/ν_min)).
In the next section we show that there are classes with a significant gap between active and auditing complexities even without an upper bound on the error.
4 Axis-Aligned Rectangles
A natural extension of thresholds to higher dimension is the class of axis-aligned rectangles, in which the labels are determined by a d-dimensional hyperrectangle. This hypothesis class, first introduced in Blumer et al. (1989), has been studied extensively in different regimes (Kearns, 1998; Long and Tan, 1998), including active learning (Hanneke, 2007b). An axis-aligned-rectangle hypothesis is a disjunction of 2d thresholds. For simplicity of presentation, we consider here the slightly simpler class of disjunctions of d thresholds over the positive orthant R^d₊. It is easy to reduce learning of an axis-aligned rectangle in R^d to learning of a disjunction of thresholds in R^{2d} by mapping each point x ∈ R^d to a point x̃ ∈ R^{2d} such that for i ∈ [d], x̃[i] = max(x[i], 0) and x̃[i + d] = max(0, −x[i]). Thus learning the class of disjunctions is equivalent, up to a factor of two in the dimensionality, to learning rectangles¹. Because auditing costs are asymmetric, we consider two possibilities for label assignment. For a vector a = (a[1], . . . , a[d]) ∈ R^d₊, define the hypotheses h_a and h̄_a by
h_a(x) = 2·I[∃i ∈ [d], x[i] ≥ a[i]] − 1, and h̄_a(x) = −h_a(x).
Define H2 = {h_a | a ∈ R^d₊} and H̄2 = {h̄_a | a ∈ R^d₊}. In H2 the positive points are outside the rectangle and in H̄2 the negatives are outside. Both classes have VC dimension d. All of our results for these classes can be easily extended to the corresponding classes of general axis-aligned rectangles on R^d, with at most a factor of two penalty on the auditing complexity.
¹This reduction suffices if the origin is known to be in the rectangle. Our algorithms and results can all be extended to the case where rectangles are not required to include the origin. To keep the algorithm and analysis as simple as possible, we state the result for this special case.
4.1 The Realizable Case
We first consider the pool setting for the realizable case, and show a sharp contrast between the auditing complexity and the active label complexity for H2 and H̄2. Assume a pool of size m. While the active learning complexity for H2 and H̄2 can be as large as m, the auditing complexities for the two classes are quite different. For H̄2, the auditing complexity can be as large as m, but for H2 it is at most d. We start by showing the upper bound for auditing of H2.
Theorem 4.1 (Pool auditing upper bound for H2). The auditing complexity of any unlabeled pool S_u of size m with respect to H2 is at most d.
Proof. The method is a generalization of the approach to auditing for thresholds. Let h* ∈ H2 such that err(S, h*) = 0. For each i ∈ [d], order the points x in S by the values of their i-th coordinates x[i]. Query the points sequentially from largest value to the smallest (breaking ties arbitrarily) and stop when the first negative label is returned, for some point x_i. Set a[i] ← x_i[i], and note that h* labels all points in {x | x[i] > a[i]} positive. Return the hypothesis ĥ = h_a. This procedure clearly queries at most d negative points and agrees with the labeling of h*.
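A Python sketch of this strategy (illustrative names; assumes the pool is a list of d-dimensional tuples and query(x) reveals the hidden label), paying for at most one negative per direction:

def audit_h2_pool(pool, d, query):
    a = [0.0] * d
    answered = {}                 # cache: each point is queried at most once
    def label(x):
        if x not in answered:
            answered[x] = query(x)
        return answered[x]
    for i in range(d):
        for x in sorted(pool, key=lambda p: p[i], reverse=True):
            if label(x) == -1:    # first negative in direction i
                a[i] = x[i]
                break
    return a                      # thresholds of the learned hypothesis h_a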
It is easy to see that a similar approach yields an auditing complexity of 2d for full axis-aligned rectangles. We now provide a lower bound for the auditing complexity of H̄2 that immediately implies the same lower bound for active label complexity of H2 and H̄2.
Theorem 4.2 (Pool auditing lower bound for H̄2). For any m and any d ≥ 2, there is a pool S_u ⊆ R^d₊ of size m such that its auditing complexity with respect to H̄2 is m.
Proof. The construction is a simple adaptation of a construction due to Dasgupta (2005), originally showing an active learning lower bound for the class of hyperplanes. Let the pool be composed of m distinct points on the intersection of the unit circle and the positive orthant: S_u = {(cos φ_j, sin φ_j)} for distinct φ_j ∈ [0, π/2]. Any labeling which labels all the points in S_u negative except any one point is realizable for H̄2, and so is the all-negative labeling. Thus, any algorithm that distinguishes between these different labelings with probability 1 must query all the negative labels.
Corollary 4.3 (Realizable active label complexity of H2 and H̄2). For H2 and H̄2, there is a pool of size m such that its active label complexity is m.
4.2 The Agnostic Case
We now consider H2 in the agnostic case, where ν > 0. The best known algorithm for active learning of rectangles (2, δ)-learns a very restricted class of distributions (continuous product distributions which are sufficiently balanced in all directions) with an active label complexity of Õ(d³ p(ln(1/ε)) p(ln(1/δ))), where p(·) is a polynomial (Hanneke, 2007b). However, for a general distribution, active label complexity cannot be significantly better than passive label complexity. This is formalized in the following theorem.
Theorem 4.4 (Agnostic active label complexity of H2). Let ε, ν > 0, δ ∈ (0, 1/2). Any learning algorithm that (ε, δ)-learns all distributions such that err(D, H) = ν for ν > 0 with respect to H2 has an active label complexity of Ω(d/ε).
In contrast, the auditing complexity of H2 can be much smaller, as we show for Alg. 3 below.
Theorem 4.5 (Auditing complexity of H2). For ν_min, ε, δ ∈ (0, 1), there is an algorithm that (ε, δ)-learns all distributions with ν ≥ ν_min with respect to H2 with an auditing complexity of O((d² ln(1/εδ)/ε²) ln²(1/ν_min)).
If ν_min is polynomially close to the true ν, we get an auditing complexity of O(d² ln²(1/ε)), compared to the active label complexity of Ω(d/ε), an exponential improvement in ε. It is an open question whether the quadratic dependence on d is necessary here.
Alg. 3 implements a "low-confidence" version of the realizable algorithm. It sequentially queries points in each direction, until enough negative points have been observed to make sure the threshold in this direction has been overstepped. To bound the number of negative labels, the algorithm iteratively refines lower bounds on the locations of the best thresholds, and an upper bound on the negative error, defined as the probability that a point from D with negative label is classified as positive by a minimal-error classifier. The algorithm uses queries that mostly result in positive labels, and stops when the upper bound on the negative error cannot be refined. The idea of iteratively refining a set of possible hypotheses has been used in a long line of active learning works (Cohn et al., 1994; Balcan et al., 2006; Hanneke, 2007a; Dasgupta et al., 2008). Here we refine in a particular way that uses the structure of H2, and allows bounding the number of negative examples we observe.
We use the following notation in Alg. 3. The negative error of a hypothesis is err_neg(D, h) = P_{(X,Y)∼D}[h(X) = 1 and Y = −1]. It is easy to see that the same convergence guarantees that hold for err(·, ·) using a sample size m_ε(ε, δ, d) hold also for the negative error err_neg(·, ·) (see Sabato et al., 2013). For a labeled set of points S, an ε ∈ (0, 1) and a hypothesis class H, denote V_β(S, ε, H) = {h ∈ H | err(S, h) ≤ err(S, H) + (2β + β²) · max(err(S, H), ε)}. For a vector b ∈ R^d₊, define H2[b] = {h_a ∈ H2 | a ≥ b}.
Algorithm 3: Auditing for H2
1: Input: ν_min > 0, δ ∈ (0, 1], access to distribution D over R^d₊ × {−1, +1}.
2: β ← 1/25.
3: for t = 0 to ⌊log₂(1/ν_min)⌋ do
4:   ν_t ← 2^{−t}.
5:   Draw a sample S_t of size m_ε(ν_t, δ/log₂(1/ν_min), 10d) with hidden labels.
6:   for i = 1 to d do
7:     j ← 0
8:     while j ≤ ⌈(1 + β)ν_t|S_t|⌉ + 1 do
9:       If unqueried points exist, query the unqueried point with highest i-th coordinate;
10:      If query returned −1, j ← j + 1.
11:    end while
12:    b_t[i] ← the i-th coordinate of the last queried point, or 0 if all points were queried.
13:  end for
14:  Set Ŝ_t to S_t, with unqueried labels set to −1.
15:  V_t ← V_β(Ŝ_t, ν_t, H2[b_t]).
16:  ν̄_t ← max_{h∈V_t} err_neg(Ŝ_t, h).
17:  if ν̄_t > ν_t/4 then
18:    Skip to step 21
19:  end if
20: end for
21: Return ĥ ← argmin_{h∈H2[b_t]} err(Ŝ_t, h).
Theorem 4.5 is proven in Sabato et al. (2013). The proof idea is to show that at each round t, V_t includes any h* ∈ argmin_{h∈H} err(D, h), and ν̄_t is an upper bound on err_neg(D, h*). Further, at any given point minimizing the error on Ŝ_t is equivalent to minimizing the error on the entire (unlabeled) sample. We conclude that the algorithm obtains a good approximation of the total error. Its auditing complexity is bounded since it queries a bounded number of negative points at each round.
5 Outcome-dependent Costs for a General Hypothesis Class
In this section we return to the realizable pool setting and consider finite hypothesis classes H. We address general outcome-dependent costs and a general space of labels Y, so that H ⊆ Y^X. Let S ⊆ X be an unlabeled pool, and let cost : S × H → R₊ denote the cost of a query: For x ∈ S and h ∈ H, cost(x, h) is the cost of querying the label of x given that h is the true (unknown) hypothesis. In the auditing setting, Y = {−1, +1} and cost(x, h) = I[h(x) = −1]. For active learning, cost ≡ 1. Note that under this definition of cost function, the algorithm may not know the cost of the query until it reveals the true hypothesis.
Define OPT_cost(S) to be the minimal cost of an algorithm that, for any labeling of S which is consistent with some h ∈ H, produces a hypothesis ĥ such that err(S, ĥ) = 0. In the active learning setting, where cost ≡ 1, it is NP-hard to obtain OPT_cost(S) for general H and S. This can be shown by a reduction to set-cover (Hyafil and Rivest, 1976). A simple adaptation of the reduction for the auditing complexity, which we defer to the full version of this work, shows that it is also NP-hard to obtain OPT_cost(S) in the auditing setting.
For active learning, and for query costs that do not depend on the true hypothesis (that is, cost(x, h) ≡ cost(x)), Golovin and Krause (2011) showed an efficient greedy strategy that achieves a cost of O(OPT_cost(S) · ln(|H|)) for any S. This approach has also been shown to provide considerable performance gains in practical settings (Gonen et al., 2013). The greedy strategy consists of iteratively selecting a point whose label splits the set of possible hypotheses as evenly as possible, with a normalization proportional to the cost of each query.
We now show that for outcome-dependent costs, another greedy strategy provides similar approximation guarantees for OPT_cost(S). The algorithm is defined as follows: Suppose that so far the algorithm requested labels for x₁, . . . , x_t and received the corresponding labels y₁, . . . , y_t. Letting S_t = {(x₁, y₁), . . . , (x_t, y_t)}, denote the current version space by V(S_t) = {h ∈ H|_S | ∀(x, y) ∈ S_t, h(x) = y}. The next query selected by the algorithm is
x ∈ argmax_{x∈S} min_{h∈H} |V(S_t) \ V(S_t ∪ {(x, h(x))})| / cost(x, h).
That is, the algorithm selects the query that, in the worst case over the possible hypotheses, would remove the most hypotheses from the version space, when normalizing by the outcome-dependent cost of the query. The algorithm terminates when |V(S_t)| = 1, and returns the single hypothesis in the version space.
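A minimal Python sketch of this selection rule (illustrative; the worst case is taken over the surviving version space, and a small positive floor on the cost is our implementation assumption to keep the ratio well defined):

def greedy_query(pool, version_space, cost, eps=1e-9):
    def shrink(x, y):
        # Version space after observing label y for point x.
        return [h for h in version_space if h(x) == y]
    def worst_case_ratio(x):
        return min(
            (len(version_space) - len(shrink(x, h(x)))) / max(cost(x, h), eps)
            for h in version_space
        )
    return max(pool, key=worst_case_ratio)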
Theorem 5.1. For any cost function cost, hypothesis class H, pool S, and true hypothesis h ∈ H, the cost of the proposed algorithm is at most (ln(|H|_S| − 1) + 1) · OPT_cost(S).
If cost is the auditing cost, the proposed algorithm corresponds to the following intuitive strategy: At every round, select a query such that, if its result is a negative label, then the number of hypotheses removed from the version space is the largest. This strategy is consistent with a simple principle based on a partial ordering of the points: For points x, x′ in the pool, define x′ ⪯ x if {h ∈ H | h(x′) = −1} ⊆ {h ∈ H | h(x) = −1}, so that if x′ has a negative label, so does x. In the auditing setting, it is always preferable to query x before querying x′. Therefore, for any realizable auditing problem, there exists an optimal algorithm that adheres to this principle. It is thus encouraging that our greedy algorithm is also consistent with it.
An O(ln(|H|_S|)) approximation factor for auditing is less appealing than the same factor for active learning. By information-theoretic arguments, active label complexity is at least log₂(|H|_S|) (and hence the approximation at most squares the cost), but this does not hold for auditing. Nonetheless, hardness of approximation results for set cover (Feige, 1998), in conjunction with the reduction to set cover of Hyafil and Rivest (1976) mentioned above, imply that such an approximation factor cannot be avoided for a general auditing algorithm.
6 Conclusion and Future Directions
As summarized in Section 2, we show that in the auditing setting, suitable algorithms can achieve
improved costs in the settings of thresholds on the line and axis parallel rectangles. There are many
open questions suggested by our work. First, it is known that for some hypothesis classes, active
learning cannot improve over passive learning for certain distributions (Dasgupta, 2005), and the
same is true for auditing. However, exponential speedups are possible for active learning on certain
classes of distributions (Balcan et al., 2006; Dasgupta et al., 2008), in particular ones with a small
disagreement coefficient (Hanneke, 2007a). It is an open question whether a similar property of
the distribution can guarantee an improvement with auditing over active or passive learning. This
might be especially relevant to important hypothesis classes such as decision trees or halfspaces. An
interesting generalization of the auditing problem is a multiclass setting with a different cost for each
label. Finally, one may attempt to optimize other performance measures for auditing, as described
in the introduction. These measures are different from those studied in active learning, and may lead
to new algorithmic insights.
References
M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning (ICML), pages 65–72, 2006.
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 49–56. ACM, 2009.
J. Blocki, N. Christin, A. Dutta, and A. Sinha. Regret minimizing audits: A learning-theoretic basis for privacy protection. In Proceedings of 24th IEEE Computer Security Foundations Symposium, 2011.
A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, Oct. 1989.
D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15:201–221, 1994.
S. Dasgupta. Analysis of a greedy active learning strategy. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 337–344. MIT Press, Cambridge, MA, 2005.
S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 353–360. MIT Press, Cambridge, MA, 2008.
L. Devroye and G. Lugosi. Lower bounds in pattern recognition and learning. Pattern Recognition, 28(7):1011–1018, 1995.
U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4):634–652, 1998.
D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.
A. Gonen, S. Sabato, and S. Shalev-Shwartz. Efficient active learning of halfspaces: an aggressive approach. In The 30th International Conference on Machine Learning (ICML), 2013.
S. Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, pages 353–360. ACM, 2007a.
S. Hanneke. Teaching dimension and the complexity of active learning. In Learning Theory, pages 66–81. Springer, 2007b.
S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15–17, May 1976.
A. Kapoor, E. Horvitz, and S. Basu. Selective supervision: Guiding supervised learning with decision-theoretic active learning. In Proceedings of IJCAI, 2007.
M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983–1006, 1998.
S. R. Kulkarni, S. K. Mitter, and J. N. Tsitsiklis. Active learning using arbitrary binary valued queries. Machine Learning, 11(1):23–35, 1993.
P. M. Long and L. Tan. PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. Machine Learning, 30(1):7–21, 1998.
D. D. Margineantu. Active cost-sensitive learning. In Proceedings of IJCAI, 2007.
S. Sabato, A. D. Sarwate, and N. Srebro. Auditing: Active learning with outcome-dependent query costs. arXiv preprint arXiv:1306.2347, 2013.
B. Settles, M. Craven, and L. Friedland. Active learning with real annotation costs. In Proceedings of the NIPS Workshop on Cost-Sensitive Learning, 2008.
V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, XVI(2):264–280, 1971.
Buy-in-Bulk Active Learning
Liu Yang
Machine Learning Department, Carnegie Mellon University
liuy@cs.cmu.edu
Jaime Carbonell
Language Technologies Institute, Carnegie Mellon University
jgc@cs.cmu.edu
Abstract
In many practical applications of active learning, it is more cost-effective to request labels in large batches, rather than one-at-a-time. This is because the cost of
labeling a large batch of examples at once is often sublinear in the number of examples in the batch. In this work, we study the label complexity of active learning
algorithms that request labels in a given number of batches, as well as the tradeoff
between the total number of queries and the number of rounds allowed. We additionally study the total cost sufficient for learning, for an abstract notion of the
cost of requesting the labels of a given number of examples at once. In particular,
we find that for sublinear cost functions, it is often desirable to request labels in
large batches (i.e., buying in bulk); although this may increase the total number of
labels requested, it reduces the total cost required for learning.
1 Introduction
In many practical applications of active learning, the cost to acquire a large batch of labels at once is
significantly less than the cost of the same number of sequential rounds of individual label requests.
This is true for both practical reasons (overhead time for start-up, reserving equipment in discrete
time-blocks, multiple labelers working in parallel, etc.) and for computational reasons (e.g., time
to update the learner?s hypothesis and select the next examples may be large). Consider making
one vs multiple hematological diagnostic tests on an out-patient. There are fixed up-front costs:
bringing the patient in for testing, drawing and storing the blood, entring the information in the
hospital record system, etc. And there are variable costs, per specific test. Consider a microarray
assay for gene expression data. There is a fixed cost in setting up and running the microarray, but
virtually no incremental cost as to the number of samples, just a constraint on the max allowed.
Either of the above conditions is often the case in scientific experiments (e.g., [1]). As a different
example, consider calling a focused group of experts to address questions w.r.t new product design
or introduction. There is a fixed cost in forming the group (determine membership, contract, travel,
etc.), and a incremental per-question cost. The common abstraction in such real-world versions
of ?oracles? is that learning can buy-in-bulk to advantage because oracles charge either per batch
(answering a batch of questions for the same cost as answering a single question up to a batch
maximum), or the cost per batch is ax^p + b, where b is the set-up cost, x is the number of queries, and p = 1 or p < 1 (for the case where practice yields efficiency).
Often we have other tradeoffs, such as delay vs testing cost. For instance in a medical diagnosis case,
the most cost-effective way to minimize diagnostic tests is purely sequential active learning, where
each test may rule out a set of hypotheses (diagnoses) and informs the next test to perform. But
a patient suffering from a serious disease may worsen while sequential tests are being conducted.
Hence batch testing makes sense if the batch can be tested in parallel. In general one can convert
delay into a second cost factor and optimize for batch size that minimizes a combination of total
delay and the sum of the costs for the individual tests. Parallelizing means more tests would be
needed, since we lack the benefit of earlier tests to rule out future ones. In order to perform this
batch-size optimization we also need to estimate the number of redundant tests incurred by turning
a sequence into a shorter sequence of batches.
For the reasons cited above, it can be very useful in practice to generalize active learning to activebatch learning, with buy-in-bulk discounts. This paper developes a theoretical framework exploring
the bounds and sample compelxity of active buy-in-bulk machine learning, and analyzing the tradeoff that can be achieved between the number of batches and the total number of queries required for
accurate learning.
In another example, if we have many labelers (virtually unlimited) operating in parallel, but must
pay for each query, and the amount of time to get back the answer to each query is considered
independent with some distribution, it may often be the case that the expected amount of time
needed to get back the answers to m queries is sublinear in m, so that if the ?cost? is a function
of both the payment amounts and the time, it might sometimes be less costly to submit multiple
queries to be labeled in parallel. In scenarios such as those mentioned above, a batch mode active
learning strategy is desirable, rather than a method that selects instances to be labeled one-at-a-time.
There have recently been several attempts to construct heuristic approaches to the batch mode active
learning problem (e.g., [2]). However, theoretical analysis has been largely lacking. In contrast,
there has recently been significant progress in understanding the advantages of fully-sequential active learning (e.g., [3, 4, 5, 6, 7]). In the present work, we are interested in extending the techniques
used for the fully-sequential active learning model, studying natural analogues of them for the batch-mode active learning model.
Formally, we are interested in two quantities: the sample complexity and the total cost. The sample
complexity refers to the number of label requests used by the algorithm. We expect batch-mode
active learning methods to use more label requests than their fully-sequential cousins. On the other
hand, if the cost to obtain a batch of labels is sublinear in the size of the batch, then we may
sometimes expect the total cost used by a batch-mode learning method to be significantly less than
the analogous fully-sequential algorithms, which request labels individually.
2 Definitions and Notation
As in the usual statistical learning problem, there is a standard Borel space X, called the instance space, and a set C of measurable classifiers h : X → {−1, +1}, called the concept space. Throughout, we suppose that the VC dimension of C, denoted d below, is finite.
In the learning problem, there is an unobservable distribution D_XY over X × {−1, +1}. Based on this quantity, we let Z = {(X_t, Y_t)}_{t=1}^∞ denote an infinite sequence of independent D_XY-distributed random variables. We also denote by Z_t = {(X₁, Y₁), (X₂, Y₂), . . . , (X_t, Y_t)} the first t such labeled examples. Additionally denote by D_X the marginal distribution of D_XY over X. For a classifier h : X → {−1, +1}, denote er(h) = P_{(X,Y)∼D_XY}(h(X) ≠ Y), the error rate of h. Additionally, for m ∈ N and Q ∈ (X × {−1, +1})^m, let er(h; Q) = (1/|Q|) Σ_{(x,y)∈Q} I[h(x) ≠ y], the empirical error rate of h. In the special case that Q = Z_m, abbreviate er_m(h) = er(h; Q). For r > 0, define B(h, r) = {g ∈ C : D_X(x : h(x) ≠ g(x)) ≤ r}. For any H ⊆ C, define DIS(H) = {x ∈ X : ∃h, g ∈ H s.t. h(x) ≠ g(x)}. We also denote by η(x) = P(Y = +1 | X = x), where (X, Y) ∼ D_XY, and let h*(x) = sign(η(x) − 1/2) denote the Bayes optimal classifier.
In the active learning protocol, the algorithm has direct access to the Xt sequence, but must request
to observe each label Yt , sequentially. The algorithm asks up to a specified number of label requests
n (the budget), and then halts and returns a classifier. We are particularly interested in determining,
for a given algorithm, how large this number of label requests needs to be in order to guarantee
small error rate with high probability, a value known as the label complexity. In the present work,
we are also interested in the cost expended by the algorithm. Specifically, in this context, there is a cost function c : N → (0, ∞), and to request the labels {Y_{i₁}, Y_{i₂}, . . . , Y_{i_m}} of m examples {X_{i₁}, X_{i₂}, . . . , X_{i_m}} at once requires the algorithm to pay c(m); we are then interested in the sum
of these costs, over all batches of label requests made by the algorithm. Depending on the form
of the cost function, minimizing the cost of learning may actually require the algorithm to request
labels in batches, which we expect would actually increase the total number of label requests.
To help quantify the label complexity and cost complexity, we make use of the following definition,
due to [6, 7].
Definition 2.1. [6, 7] Define the disagreement coefficient of h* as
θ(ε) = sup_{r>ε} D_X(DIS(B(h*, r)))/r.
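A quick Monte-Carlo illustration of this definition (not from the paper): for threshold classifiers on [0, 1] under the uniform marginal with h*(x) = sign(x − 0.5), the classifiers within distance r of h* are thresholds in [0.5 − r, 0.5 + r], so DIS is exactly that interval and the ratio approaches 2, i.e., θ(ε) = 2 for this class.

import random

rng = random.Random(0)
for r in [0.4, 0.2, 0.1, 0.05]:
    xs = [rng.random() for _ in range(200000)]
    p_dis = sum(0.5 - r <= x <= 0.5 + r for x in xs) / len(xs)
    print(f"r={r:5.2f}  P(DIS)/r ~= {p_dis / r:.3f}")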
3 Buy-in-Bulk Active Learning in the Realizable Case: k-batch CAL
We begin our analysis with the simplest case: namely, the realizable case, with a fixed prespecified number of batches. We are then interested in quantifying the label complexity for such a scenario. Formally, in this section we suppose h* ∈ C and er(h*) = 0. This is referred to as the realizable case. We first review a well-known method for active learning in the realizable case, referred to as CAL after its discoverers Cohn, Atlas, and Ladner [8].
Algorithm: CAL(n)
1. t ← 0, m ← 0, Q ← ∅
2. While t < n
3.   m ← m + 1
4.   If max_{y∈{−1,+1}} min_{h∈C} er(h; Q ∪ {(X_m, y)}) = 0
5.     Request Y_m, let Q ← Q ∪ {(X_m, Y_m)}, t ← t + 1
6. Return ĥ = argmin_{h∈C} er(h; Q)
The label complexity of CAL is known to be O(θ(ε)(d log(θ(ε)) + log(log(1/ε)/δ)) log(1/ε)) [7]. That is, some n of this size suffices to guarantee that, with probability 1 − δ, the returned classifier ĥ has er(ĥ) ≤ ε.
One particularly simple way to modify this algorithm to make it batch-based is to simply divide up the budget into equal batch sizes. This yields the following method, which we refer to as k-batch CAL, where k ∈ {1, . . . , n}.
Algorithm: k-batch CAL(n)
1. Let Q ← {}, b ← 1, V ← C
2. For m = 1, 2, . . .
3.   If X_m ∈ DIS(V)
4.     Q ← Q ∪ {X_m}
5.   If |Q| = ⌊n/k⌋
6.     Request the labels of examples in Q
7.     Let L be the corresponding labeled examples
8.     V ← {h ∈ V : er(h; L) = 0}
9.     b ← b + 1 and Q ← ∅
10.    If b > k, Return any ĥ ∈ V
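A minimal Python sketch of k-batch CAL (illustrative names), assuming a finite hypothesis class H of callables, a stream xs of unlabeled points, and query(x) returning the hidden label:

def k_batch_cal(H, xs, n, k, query):
    V = list(H)
    Q, b = [], 1
    batch_size = n // k
    for x in xs:
        # x is in DIS(V) iff surviving hypotheses disagree on it.
        if len({h(x) for h in V}) > 1:
            Q.append(x)
        if len(Q) == batch_size:
            L = [(xq, query(xq)) for xq in Q]      # one batched request
            V = [h for h in V if all(h(xq) == yq for xq, yq in L)]
            Q, b = [], b + 1
            if b > k:
                break
    return V[0] if V else None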
We expect the label complexity of k-batch CAL to somehow interpolate between passive learning
(at k = 1) and the label complexity of CAL (at k = n). Indeed, the following theorem bounds the
label complexity of k-batch CAL by a function that exhibits this interpolation behavior with respect
to the known upper bounds for these two cases.
Theorem 3.1. In the realizable case, for some
Λ(ε, δ) = O(k ε^{−1/k} θ(ε)^{1−1/k} (d log(1/ε) + log(1/δ))),
for any n ≥ Λ(ε, δ), with probability at least 1 − δ, running k-batch CAL with budget n produces a classifier ĥ with er(ĥ) ≤ ε.
Proof. Let M = ⌊n/k⌋. Define V₀ = C and i_{0M} = 0. Generally, for b ≥ 1, let i_{b1}, i_{b2}, . . . , i_{bM} denote the indices i of the first M points X_i ∈ DIS(V_{b−1}) for which i > i_{(b−1)M}, and let V_b = {h ∈ V_{b−1} : ∀j ≤ M, h(X_{i_{bj}}) = h*(X_{i_{bj}})}. These correspond to the version space at the conclusion of batch b in the k-batch CAL algorithm.
Note that X_{i_{b1}}, . . . , X_{i_{bM}} are conditionally iid given V_{b−1}, with distribution of X given X ∈ DIS(V_{b−1}). Thus, the PAC bound of [9] implies that, for some constant c ∈ (0, ∞), with probability ≥ 1 − δ/k,
V_b ⊆ B(h*, c ((d log(M/d) + log(k/δ))/M) P(DIS(V_{b−1}))).
By a union bound, the above holds for all b ≤ k with probability ≥ 1 − δ; suppose this is the case. Since P(DIS(V_{b−1})) ≤ θ(ε) max{ε, max_{h∈V_{b−1}} er(h)}, and any b with max_{h∈V_{b−1}} er(h) ≤ ε would also have max_{h∈V_b} er(h) ≤ ε, we have
max_{h∈V_b} er(h) ≤ max{ε, c ((d log(M/d) + log(k/δ))/M) θ(ε) max_{h∈V_{b−1}} er(h)}.
Noting that P(DIS(V₀)) ≤ 1 implies V₁ ⊆ B(h*, c (d log(M/d) + log(k/δ))/M), by induction we have
max_{h∈V_k} er(h) ≤ max{ε, (c (d log(M/d) + log(k/δ))/M)^k θ(ε)^{k−1}}.
For some constant c′ > 0, any M ≥ c′ (θ(ε)^{(k−1)/k}/ε^{1/k}) (d log(1/ε) + log(k/δ)) makes the right-hand side ≤ ε. Since M = ⌊n/k⌋, it suffices to have n ≥ k (1 + c′ (θ(ε)^{(k−1)/k}/ε^{1/k}) (d log(1/ε) + log(k/δ))).
d log
Theorem 3.1 has the property that, when the disagreement coefficient is small, the stated bound
on the total number of label requests sufficient for learning is a decreasing function of k. This
makes sense, since ?(?) small would imply that fully-sequential active learning is much better than
passive learning. Small values of k correspond to more passive-like behavior, while larger values of
k take fuller advantage of the sequential nature of active learning. In particular, when k = 1, we
recover a well-known label complexity bound for passive learning by empirical risk minimization
[10]. In contrast, when k = log(1/?), the ??1/k factor is e (constant), and the rest of the bound is at
most O(?(?)(d log(1/?) + log(1/?)) log(1/?)), which is (up to a log factor) a well-known bound on
the label complexity of CAL for active learning [7] (a slight refinement of the proof would in fact
recover the exact bound of [7] for this case); for k larger than log(1/?), the label complexity can only
improve; for instance, consider that upon reaching a given data point Xm in the data stream, if V is
the version space in k-batch CAL (for some k), and V ? is the version space in 2k-batch CAL, then
we have V ? ? V (supposing n is a multiple of 2k), so that Xm ? DIS(V ? ) only if Xm ? DIS(V ).
Note that even k = 2 can sometimes provide significant
? reductions in label complexity over passive
learning: for instance, by a factor proportional to 1/ ? in the case that ?(?) is bounded by a finite
constant.
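To see the interpolation numerically, one can evaluate the k-dependent part of the Theorem 3.1 bound (purely illustrative; constants and the common d log(1/ε) + log(1/δ) factor are suppressed):

import math

eps, theta = 1e-4, 10.0
for k in [1, 2, 4, int(math.log(1 / eps)), 32]:
    bound = k * eps ** (-1 / k) * theta ** (1 - 1 / k)
    print(f"k={k:3d}  k * eps^(-1/k) * theta^(1-1/k) = {bound:12.1f}")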
4 Batch Mode Active Learning with Tsybakov Noise
The above analysis was for the realizable case. While this provides a particularly clean and simple
analysis, it is not sufficiently broad to cover many realistic learning applications. To move beyond
the realizable case, we need to allow the labels to be noisy, so that er(h*) > 0. One popular noise model in the statistical learning theory literature is Tsybakov noise, which is defined as follows.
Definition 4.1. [11] The distribution D_XY satisfies Tsybakov noise if h* ∈ C, and for some c > 0 and α ∈ [0, 1],
∀t > 0, P(|η(x) − 1/2| < t) < c₁ t^{α/(1−α)},
equivalently, ∀h, P(h(x) ≠ h*(x)) ≤ c₂ (er(h) − er(h*))^α, where c₁ and c₂ are constants.
Supposing D_XY satisfies Tsybakov noise, we define a quantity
E_m = c₃ ((d log(m/d) + log(km/δ))/m)^{1/(2−α)}
based on a standard generalization bound for passive learning [12]. Specifically, [12] have shown that, for any V ⊆ C, with probability at least 1 − δ/(4km²),
sup_{h,g∈V} |(er(h) − er(g)) − (er_m(h) − er_m(g))| < E_m.   (1)
Consider the following modification of k-batch CAL, designed to be robust to Tsybakov noise. We refer to this method as k-batch Robust CAL, where k ∈ {1, . . . , n}.
Algorithm: k-batch Robust CAL(n)
1. Let Q ← {}, b ← 1, V ← C, m₁ ← 0
2. For m = 1, 2, . . .
3.   If X_m ∈ DIS(V)
4.     Q ← Q ∪ {X_m}
5.   If |Q| = ⌊n/k⌋
6.     Request the labels of examples in Q
7.     Let L be the corresponding labeled examples
8.     V ← {h ∈ V : (er(h; L) − min_{g∈V} er(g; L)) ⌊n/k⌋/(m − m_b) ≤ E_{m−m_b}}
9.     b ← b + 1 and Q ← ∅
10.    m_b ← m
11.    If b > k, Return any ĥ ∈ V
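A Python sketch of the version-space update in step 8 (illustrative): excess empirical error on the |Q| = ⌊n/k⌋ queried points is rescaled to the m − m_b stream points they represent, and the deviation bound E is passed in as a callable since its constants (c₃, α) are problem dependent.

def robust_update(V, L, M, stream_count, E):
    """V: current hypotheses; L: labeled batch; M = floor(n/k);
    stream_count = m - m_b; E: callable giving E_{m - m_b}."""
    def emp_err(h):
        return sum(1 for x, y in L if h(x) != y) / len(L)
    best = min(emp_err(h) for h in V)
    thresh = E(stream_count)
    return [h for h in V
            if (emp_err(h) - best) * M / stream_count <= thresh]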
Theorem 4.2. Under the Tsybakov noise condition, letting γ = α/(2−α), and γ̄ = Σ_{i=0}^{k−1} γ^i, for some
Λ(ε, δ) = O( k (c₂ θ(c₂ ε^α))^{1−γ^{k−1}/γ̄} (1/ε)^{(2−α)/γ̄} (d log(d/ε) + log(kd/(δε)))^{(1+γ(γ̄−γ^{k−1}))/γ̄} ),
for any n ≥ Λ(ε, δ), with probability at least 1 − δ, running k-batch Robust CAL with budget n produces a classifier ĥ with er(ĥ) − er(h*) ≤ ε.
Proof. Let M = ⌊n/k⌋. Define i_{0M} = 0 and V₀ = C. Generally, for b ≥ 1, let i_{b1}, i_{b2}, . . . , i_{bM} denote the indices i of the first M points X_i ∈ DIS(V_{b−1}) for which i > i_{(b−1)M}, and let Q_b = {(X_{i_{b1}}, Y_{i_{b1}}), . . . , (X_{i_{bM}}, Y_{i_{bM}})} and V_b = {h ∈ V_{b−1} : (er(h; Q_b) − min_{g∈V_{b−1}} er(g; Q_b)) M/(i_{bM} − i_{(b−1)M}) ≤ E_{i_{bM}−i_{(b−1)M}}}. These correspond to the set V at the conclusion of batch b in the k-batch Robust CAL algorithm.
For b ∈ {1, . . . , k}, (1) (applied under the conditional distribution given V_{b−1}, combined with the law of total probability) implies that ∀m > 0, letting Z_{b,m} = {(X_{i_{(b−1)M}+1}, Y_{i_{(b−1)M}+1}), . . . , (X_{i_{(b−1)M}+m}, Y_{i_{(b−1)M}+m})}, with probability at least 1 − δ/(4km²), if h* ∈ V_{b−1}, then er(h*; Z_{b,m}) − min_{g∈V_{b−1}} er(g; Z_{b,m}) < E_m, and every h ∈ V_{b−1} with er(h; Z_{b,m}) − min_{g∈V_{b−1}} er(g; Z_{b,m}) ≤ E_m has er(h) − er(h*) < 2E_m. By a union bound, this holds for all m ∈ N, with probability at least 1 − δ/(2k). In particular, this means it holds for m = i_{bM} − i_{(b−1)M}. But note that for this value of m, any h, g ∈ V_{b−1} have er(h; Z_{b,m}) − er(g; Z_{b,m}) = (er(h; Q_b) − er(g; Q_b)) M/m (since for every (x, y) ∈ Z_{b,m} \ Q_b, either both h and g make a mistake, or neither do). Thus if h* ∈ V_{b−1}, we have h* ∈ V_b as well, and furthermore sup_{h∈V_b} er(h) − er(h*) < 2E_{i_{bM}−i_{(b−1)M}}. By induction (over b) and a union bound, these are satisfied for all b ∈ {1, . . . , k} with probability at least 1 − δ/2. For the remainder of the proof, we suppose this 1 − δ/2 probability event occurs.
Next, we focus on lower bounding i_{bM} − i_{(b−1)M}, again by induction. As a base case, we clearly have i_{1M} − i_{0M} ≥ M. Now suppose some b ∈ {2, . . . , k} has i_{(b−1)M} − i_{(b−2)M} ≥ T_{b−1} for some T_{b−1}. Then, by the above, we have sup_{h∈V_{b−1}} er(h) − er(h*) < 2E_{T_{b−1}}. By the Tsybakov noise condition, this implies V_{b−1} ⊆ B(h*, c₂ (2E_{T_{b−1}})^α), so that if sup_{h∈V_{b−1}} er(h) − er(h*) > ε, P(DIS(V_{b−1})) ≤ θ(c₂ε^α) c₂ (2E_{T_{b−1}})^α. Now note that the conditional distribution of i_{bM} − i_{(b−1)M} given V_{b−1} is a negative binomial random variable with parameters M and 1 − P(DIS(V_{b−1})) (that is, a sum of M Geometric(P(DIS(V_{b−1}))) random variables). A Chernoff bound (applied under the conditional distribution given V_{b−1}) implies that P(i_{bM} − i_{(b−1)M} < M/(2P(DIS(V_{b−1}))) | V_{b−1}) < e^{−M/6}. Thus, for V_{b−1} as above, with probability at least 1 − e^{−M/6}, i_{bM} − i_{(b−1)M} ≥ M/(2θ(c₂ε^α) c₂ (2E_{T_{b−1}})^α). Thus, we can define T_b as in the right-hand side, which thereby defines a recurrence. By induction, with probability at least 1 − ke^{−M/6} > 1 − δ/2,
i_{kM} − i_{(k−1)M} ≥ M^{γ̄} (1/(4c₂θ(c₂ε^α)))^{γ̄−γ^{k−1}} (1/(2(d log(M) + log(kM/δ))))^{γ(γ̄−γ^{k−1})}.
By a union bound, with probability 1 − δ, this occurs simultaneously with the above sup_{h∈V_k} er(h) − er(h*) < 2E_{i_{kM}−i_{(k−1)M}} bound. Combining these two results yields
sup_{h∈V_k} er(h) − er(h*) = O( ( (c₂θ(c₂ε^α))^{γ̄−γ^{k−1}} (d log(M) + log(kM/δ))^{1+γ(γ̄−γ^{k−1})} / M^{γ̄} )^{1/(2−α)} ).
Setting this to ε and solving for n, we find that it suffices to have
M ≥ c₄ (c₂θ(c₂ε^α))^{1−γ^{k−1}/γ̄} (1/ε)^{(2−α)/γ̄} (d log(d/ε) + log(kd/(δε)))^{(1+γ(γ̄−γ^{k−1}))/γ̄},
for some constant c₄ ∈ [1, ∞), which then implies the stated result.
Note: the threshold E_m in k-batch Robust CAL has a direct dependence on the parameters of the Tsybakov noise condition. We have expressed the algorithm in this way only to simplify the presentation. In practice, such information is not often available. However, we can replace E_m with a data-dependent local Rademacher complexity bound Ê_m, as in [7], which also satisfies (1), and satisfies (with high probability) Ê_m ≤ c′E_m, for some constant c′ ∈ [1, ∞) (see [13]). This modification would therefore provide essentially the same guarantee stated above (up to constant factors), without having any direct dependence on the noise parameters, and the analysis gets only slightly more involved to account for the confidences in the concentration inequalities for these Ê_m estimators.
A similar result can also be obtained for batch-based variants of other noise-robust disagreement-based active learning algorithms from the literature (e.g., a variant of A² [5] that uses updates based on quantities related to these Ê_m estimators, in place of the traditional upper-bound/lower-bound construction, would also suffice).
When k = 1, Theorem 4.2 matches the best results for passive learning (up to log factors), which
are known to be minimax optimal (again, up to log factors). If we let k become large (while still
considered as a constant), our result converges to the known results for one-at-a-time active learning
with RobustCAL (again, up to log factors) [7, 14]. Although those results are not always minimax
optimal, they do represent the state-of-the-art in the general analysis of active learning, and they are
really the best we could hope for from basing our algorithm on RobustCAL.
5 Buy-in-Bulk Solutions to Cost-Adaptive Active Learning
The above sections discussed scenarios in which we have a fixed number k of batches, and we
simply bounded the label complexity achievable within that constraint by considering a variant of
CAL that uses k equal-sized batches. In this section, we take a slightly different approach to the
problem, by going back to one of the motivations for using batch-based active learning in the first
place: namely, sublinear costs for answering batches of queries at a time. If the cost of answering
m queries at once is sublinear in m, then batch-based algorithms arise naturally from the problem
of optimizing the total cost required for learning.
Formally, in this section, we suppose we are given a cost function c : (0, ∞) → (0, ∞), which is nondecreasing, satisfies c(λx) ≤ λc(x) (for x, λ ∈ [1, ∞)), and further satisfies the condition that for every q ∈ N, ∃q′ ∈ N such that 2c(q) ≤ c(q′) ≤ 4c(q), which typically amounts to a kind of smoothness assumption. For instance, c(q) = √q would satisfy these conditions (as would many other smooth increasing concave functions); the latter assumption can be generalized to allow other constants, though we only study this case below for simplicity.
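A quick numeric check (illustrative, not from the paper) that c(q) = √q satisfies the smoothness condition: q′ = 4q gives c(q′) = 2c(q) exactly, so 2c(q) ≤ c(q′) ≤ 4c(q) holds (and √(λx) = √λ · √x ≤ λ√x for λ ≥ 1 gives the other condition).

import math

c = math.sqrt
for q in [1, 10, 100, 1000]:
    q_prime = 4 * q
    assert 2 * c(q) <= c(q_prime) <= 4 * c(q)
    print(q, q_prime, c(q), c(q_prime))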
To understand the total cost required for learning in this model, we consider the following cost-adaptive modification of the CAL algorithm.
Algorithm: Cost-Adaptive CAL(C)
1. Q ← ∅, R ← DIS(C), V ← C, t ← 0
2. Repeat
3.   q ← 1
4.   Do until P(DIS(V)) ≤ P(R)/2
5.     Let q′ > q be minimal such that c(q′ − q) ≥ 2c(q)
6.     If c(q′ − q) + t > C, Return any ĥ ∈ V
7.     Request the labels of the next q′ − q examples in DIS(V)
8.     Update V by removing those classifiers inconsistent with these labels
9.     Let t ← t + c(q′ − q)
10.    q ← q′
11.  R ← DIS(V)
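A Python sketch of this method (illustrative; p_dis(V) is an assumed helper estimating P(DIS(V)), e.g., by Monte Carlo on unlabeled data, and xs must be an iterator so the stream resumes between batches):

def cost_adaptive_cal(H, xs, C, query, c, p_dis):
    V, t = list(H), 0.0
    while True:
        R = p_dis(V)               # P(R) for the current region R = DIS(V)
        q = 1
        while p_dis(V) > R / 2:
            q_next = q + 1         # minimal q' > q with c(q' - q) >= 2c(q)
            while c(q_next - q) < 2 * c(q):
                q_next += 1
            if c(q_next - q) + t > C:
                return V[0] if V else None
            batch = []
            for x in xs:           # collect the next q' - q points in DIS(V)
                if len({h(x) for h in V}) > 1:
                    batch.append(x)
                    if len(batch) == q_next - q:
                        break
            if len(batch) < q_next - q:
                return V[0] if V else None   # stream exhausted
            labeled = [(x, query(x)) for x in batch]
            V = [h for h in V if all(h(x) == y for x, y in labeled)]
            t += c(q_next - q)
            q = q_next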
Note that the total cost expended by this method never exceeds the budget argument C. We have the following result on how large of a budget C is sufficient for this method to succeed.
Theorem 5.1. In the realizable case, for some
Λ(ε, δ) = O(c(θ(ε)(d log(θ(ε)) + log(log(1/ε)/δ))) log(1/ε)),
for any C ≥ Λ(ε, δ), with probability at least 1 − δ, Cost-Adaptive CAL(C) returns a classifier ĥ with er(ĥ) ≤ ε.
Proof. Supposing an unlimited budget (C = ∞), let us determine how much cost the algorithm incurs prior to having sup_{h∈V} er(h) ≤ ε; this cost would then be a sufficient size for C to guarantee this occurs. First, note that h* ∈ V is maintained as an invariant throughout the algorithm. Also, note that if q is ever at least as large as O(θ(ε)(d log(θ(ε)) + log(1/δ′))), then as in the analysis for CAL [7], we can conclude (via the PAC bound of [9]) that with probability at least 1 − δ′,
sup_{h∈V} P(h(X) ≠ h*(X) | X ∈ R) ≤ 1/(2θ(ε)),
so that
sup_{h∈V} er(h) = sup_{h∈V} P(h(X) ≠ h*(X) | X ∈ R) P(R) ≤ P(R)/(2θ(ε)).
We know R = DIS(V′) for the set V′ which was the value of the variable V at the time this R was obtained. Supposing sup_{h∈V′} er(h) > ε, we know (by the definition of θ(ε)) that
P(R) ≤ P(DIS(B(h*, sup_{h∈V′} er(h)))) ≤ θ(ε) sup_{h∈V′} er(h).
Therefore,
sup_{h∈V} er(h) ≤ (1/2) sup_{h∈V′} er(h).
In particular, this implies the condition in Step 4 will be satisfied if this happens while sup_{h∈V} er(h) > ε. But this condition can be satisfied at most ⌈log₂(1/ε)⌉ times while sup_{h∈V} er(h) > ε (since sup_{h∈V} er(h) ≤ P(DIS(V))). So with probability at least 1 − δ′⌈log₂(1/ε)⌉, as long as sup_{h∈V} er(h) > ε, we always have c(q) ≤ 4c(O(θ(ε)(d log(θ(ε)) + log(1/δ′)))) ≤ O(c(θ(ε)(d log(θ(ε)) + log(1/δ′)))). Letting δ′ = δ/⌈log₂(1/ε)⌉, this is 1 − δ. So for each round of the outer loop while sup_{h∈V} er(h) > ε, by summing the geometric series of cost values c(q′ − q) in the inner loop, we find the total cost incurred is at most O(c(θ(ε)(d log(θ(ε)) + log(log(1/ε)/δ)))). Again, there are at most ⌈log₂(1/ε)⌉ rounds of the outer loop while sup_{h∈V} er(h) > ε, so that the total cost incurred before we have sup_{h∈V} er(h) ≤ ε is at most O(c(θ(ε)(d log(θ(ε)) + log(log(1/ε)/δ))) log(1/ε)).
Comparing this result to the known label complexity of CAL, which is (from [7])
O(θ(ε)(d log(θ(ε)) + log(log(1/ε)/δ)) log(1/ε)),
we see that the major factor, namely the O(θ(ε)(d log(θ(ε)) + log(log(1/ε)/δ))) factor, is now inside the argument to the cost function c(·). In particular, when this cost function is sublinear, we expect this bound to be significantly smaller than the cost required by the original fully-sequential
CAL algorithm, which uses batches of size 1, so that there is a significant advantage to using this
batch-mode active learning algorithm.
Again, this result is formulated for the realizable case for simplicity, but can easily be extended to the Tsybakov noise model as in the previous section. In particular, by reasoning quite similar to that above, a cost-adaptive variant of the Robust CAL algorithm of [14] achieves error rate er(ĥ) − er(h*) ≤ ε with probability at least 1 − δ using a total cost
O(c(θ(c₂ε^α) c₂² ε^{2α−2} d polylog(1/(εδ))) log(1/ε)).
We omit the technical details for brevity. However, the idea is similar to that above, except that the update to the set V is now as in k-batch Robust CAL (with an appropriate modification to the δ-related logarithmic factor in E_m), rather than simply those classifiers making no mistakes. The proof then follows analogous to that of Theorem 5.1, the only major change being that now we bound the number of unlabeled examples processed in the inner loop before sup_{h∈V} P(h(X) ≠ h*(X)) ≤ P(R)/(2θ); letting V′ be the previous version space (the one for which R = DIS(V′)), we have P(R) ≤ θc₂(sup_{h∈V′} er(h) − er(h*))^α, so that it suffices to have sup_{h∈V} P(h(X) ≠ h*(X)) ≤ (c₂/2)(sup_{h∈V′} er(h) − er(h*))^α, and for this it suffices to have sup_{h∈V} er(h) − er(h*) ≤ 2^{−1/α}(sup_{h∈V′} er(h) − er(h*)); by inverting E_m, we find that it suffices to have a number of samples Õ((2^{−1/α}(sup_{h∈V′} er(h) − er(h*)))^{α−2} d). Since the number of label requests among m samples in the inner loop is roughly Õ(mP(R)) ≤ Õ(mθc₂(sup_{h∈V′} er(h) − er(h*))^α), the batch size needed to make sup_{h∈V} P(h(X) ≠ h*(X)) ≤ P(R)/(2θ) is at most Õ(θc₂2^{2/α}(sup_{h∈V′} er(h) − er(h*))^{2α−2} d). When sup_{h∈V′} er(h) − er(h*) > ε, this is at most Õ(θc₂2^{2/α}ε^{2α−2} d). If sup_{h∈V} P(h(X) ≠ h*(X)) ≤ P(R)/(2θ) is ever satisfied, then by the same reasoning as above, the update condition in Step 4 would be satisfied. Again, this update can be satisfied at most log(1/ε) times before achieving sup_{h∈V} er(h) − er(h*) ≤ ε.
6 Conclusions
We have seen that the analysis of active learning can be adapted to the setting in which labels are
requested in batches. We studied this in two related models of learning. In the first case, we supposed
the number k of batches is specified, and we analyzed the number of label requests used by an
algorithm that requested labels in k equal-sized batches. As a function of k, this label complexity
became closer to that of the analogous results for fully-sequential active learning for larger values of
k, and closer to the label complexity of passive learning for smaller values of k, as one would expect.
Our second model was based on a notion of the cost to request the labels of a batch of a given size.
We studied an active learning algorithm designed for this setting, and found that the total cost used
by this algorithm may often be significantly smaller than that used by the analogous fully-sequential
active learning methods, particularly when the cost function is sublinear.
There are many active learning algorithms in the literature that can be described (or analyzed) in
terms of batches of label requests. For instance, this is the case for the margin-based active learning
strategy explored by [15]. Here we have only studied variants of CAL (and its noise-robust generalization). However, one could also apply this style of analysis to other methods, to investigate
analogous questions of how the label complexities of such methods degrade as the batch sizes increase, or how such methods might be modified to account for a sublinear cost function, and what
results one might obtain on the total cost of learning with these modified methods. This could
potentially be a fruitful future direction for the study of batch mode active learning.
The tradeoff between the total number of queries and the number of rounds examined in this paper is
natural to study. Similar tradeoffs have been studied in other contexts. In any two-party communication task, there are three measures of complexity that are typically used: communication complexity
(the total number of bits exchanged), round complexity (the number of rounds of communication),
and time complexity. The classic work [16] considered the problem of the tradeoffs between communication complexity and rounds of communication. [17] studies the tradeoffs among all three of
communication complexity, round complexity, and time complexity. Interested readers may wish
to go beyond the present and to study the tradeoffs among all the three measures of complexity for
batch mode active learning.
References
[1] V. S. Sheng and C. X. Ling. Feature value acquisition in testing: a sequential batch test algorithm. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[2] S. Chakraborty, V. Balasubramanian, and S. Panchanathan. An optimization based framework for dynamic batch mode active learning. In Advances in Neural Information Processing, 2010.
[3] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Journal of Machine Learning Research, 10:281–299, 2009.
[4] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18, 2005.
[5] M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proc. of the 23rd International Conference on Machine Learning, 2006.
[6] S. Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[7] S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
[8] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[9] V. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer-Verlag, New York, 1982.
[10] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[11] E. Mammen and A. B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27:1808–1829, 1999.
[12] P. Massart and É. Nédélec. Risk bounds for statistical learning. The Annals of Statistics, 34(5):2326–2366, 2006.
[13] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006.
[14] S. Hanneke. Activized learning: Transforming passive to active with improved label complexity. Journal of Machine Learning Research, 13(5):1469–1587, 2012.
[15] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Proceedings of the 20th Conference on Learning Theory, 2007.
[16] C. H. Papadimitriou and M. Sipser. Communication complexity. Journal of Computer and System Sciences, 28(2):260–269, 1984.
[17] P. Harsha, Y. Ishai, J. Kilian, K. Nissim, and S. Venkatesh. Communication versus computation. In The 31st International Colloquium on Automata, Languages and Programming, pages 745–756, 2004.
4,373 | 4,958 | Active Learning for Probabilistic Hypotheses Using
the Maximum Gibbs Error Criterion
Nguyen Viet Cuong
Wee Sun Lee
Nan Ye
Department of Computer Science
National University of Singapore
{nvcuong,leews,yenan}@comp.nus.edu.sg
Kian Ming A. Chai
Hai Leong Chieu
DSO National Laboratories, Singapore
{ckianmin,chaileon}@dso.org.sg
Abstract
We introduce a new objective function for pool-based Bayesian active learning
with probabilistic hypotheses. This objective function, called the policy Gibbs
error, is the expected error rate of a random classifier drawn from the prior distribution on the examples adaptively selected by the active learning policy. Exact
maximization of the policy Gibbs error is hard, so we propose a greedy strategy
that maximizes the Gibbs error at each iteration, where the Gibbs error on an
instance is the expected error of a random classifier selected from the posterior
label distribution on that instance. We apply this maximum Gibbs error criterion
to three active learning scenarios: non-adaptive, adaptive, and batch active learning. In each scenario, we prove that the criterion achieves near-maximal policy
Gibbs error when constrained to a fixed budget. For practical implementations,
we provide approximations to the maximum Gibbs error criterion for Bayesian
conditional random fields and transductive Naive Bayes. Our experimental results on a named entity recognition task and a text classification task show that the
maximum Gibbs error criterion is an effective active learning criterion for noisy
models.
1 Introduction
In pool-based active learning [1], we select training data from a finite set (called a pool) of unlabeled
examples and aim to obtain good performance on the set by asking for as few labels as possible. If a
large enough pool is sampled from the true distribution, good performance of a classifier on the pool
implies good generalization performance of the classifier. Previous theoretical works on Bayesian
active learning mainly deal with the noiseless case, which assumes a prior distribution on a collection
of deterministic mappings from observations to labels [2, 3]. A fixed deterministic mapping is then
drawn from the prior, and it is used to label the examples.
In this paper, probabilistic hypotheses, rather than deterministic ones, are used to label the examples.
We formulate the objective as a maximum coverage objective with a fixed budget: with a budget of
k queries, we aim to select k examples such that the policy Gibbs error is maximal. The policy
Gibbs error of a policy is the expected error rate of a Gibbs classifier1 on the set adaptively selected
by the policy. The policy Gibbs error is a lower bound of the policy entropy, a generalization of the
Shannon entropy to general (both adaptive and non-adaptive) policies. For non-adaptive policies,
1 A Gibbs classifier samples a hypothesis from the prior for labeling.
Figure 1: An example of a non-adaptive policy tree (left) and an adaptive policy tree (right).
the policy Gibbs error reduces to the Gibbs error for sets, which is a special case of a measure of
uncertainty called the Tsallis entropy [4].
By maximizing policy Gibbs error, we hope to maximize the policy entropy, whose maximality
implies the minimality of the posterior label entropy of the remaining unlabeled examples in the
pool. Besides, by maximizing policy Gibbs error, we also aim to obtain a small expected error of
a posterior Gibbs classifier (which samples a hypothesis from the posterior instead of the prior for
labeling). Small expected error of the posterior Gibbs classifier is desirable as it upper bounds the
Bayes error but is at most twice of it.
Maximizing policy Gibbs error is hard, and we propose a greedy criterion, the maximum Gibbs error
criterion (maxGEC), to solve it. With this criterion, the next query is made on the candidate (which
may be one or several examples) that has maximum Gibbs error, the probability that a randomly
sampled labeling does not match the actual labeling. We investigate this criterion in three settings:
the non-adaptive setting, the adaptive setting and batch setting (also called batch mode setting) [5].
In the non-adaptive setting, the set of examples is not labeled until all examples in the set have all
been selected. In the adaptive setting, the examples are labeled as soon as they are selected, and
the new information is used to select the next example. In the batch setting, we select a batch of
examples, query their labels and proceed to select the next batch taking into account the labels. In all
these settings, we prove that maxGEC is near-optimal compared to the best policy that has maximal
policy Gibbs error in the setting.
We examine how to compute the maxGEC criterion, particularly for large structured probabilistic
models such as the conditional random fields [6]. When inference in the conditional random field
can be done efficiently, we show how to compute an approximation to the Gibbs error by sampling
and efficient inference. We provide an approximation for maxGEC in the non-adaptive and batch
settings with Bayesian transductive Naive Bayes model. Finally, we conduct pool-based active
learning experiments using maxGEC for a named entity recognition task with conditional random
fields and a text classification task with Bayesian transductive Naive Bayes. The results show good
performance of maxGEC in terms of the area under the curve (AUC).
2 Preliminaries
Let 𝒳 be a set of examples, 𝒴 be a fixed finite set of labels, and H be a set of probabilistic hypotheses. We assume H is finite, but our results extend readily to general H. For any probabilistic hypothesis h ∈ H, its application to an example x ∈ 𝒳 is a categorical random variable with support 𝒴, and we write P[h(x) = y|h] for the probability that h(x) has value y ∈ 𝒴. We extend the notation to any sequence S of examples from 𝒳 and write P[h(S) = y|h] for the probability that h(S) has a labeling y ∈ 𝒴^|S|, where 𝒴^|S| is the set of all labelings of S. We operate within the Bayesian setting and assume a prior probability p0[h] on H. We use pD[h] to denote the posterior p0[h|D] after observing a set D of labeled examples from 𝒳 × 𝒴.
A pool-based active learning algorithm is a policy for choosing training examples from a pool X ⊆ 𝒳. At the beginning, a fixed labeling y* of X is given by a hypothesis h drawn from the prior p0[h] and is hidden from the learner. Equivalently, y* can be drawn from the prior label distribution p0[y*; X]. For any distribution p[h], we use p[y; S] to denote the probability that examples in S are assigned the labeling y by a hypothesis drawn randomly from p[h]. Formally, p[y; S] := Σ_{h∈H} p[h] P[h(S) = y|h]. When S is a singleton {x}, we write p[y; x] for p[{y}; {x}].
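To make this notation concrete, the following Python sketch computes p[y; S] for a finite hypothesis class by direct marginalization. It is only an illustration: the names prior and predict are hypothetical stand-ins for p[h] and P[h(x) = y | h], not objects defined in the paper.

```python
def label_prob(prior, predict, S, y):
    # p[y; S] = sum_h p[h] * prod_i P[h(S[i]) = y[i] | h]
    # prior: dict mapping each hypothesis h to its prior probability p[h]
    # predict: function (h, x, label) -> P[h(x) = label | h]
    total = 0.0
    for h, p_h in prior.items():
        likelihood = 1.0
        for x_i, y_i in zip(S, y):
            likelihood *= predict(h, x_i, y_i)
        total += p_h * likelihood
    return total
```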
During the learning process, each time the learner selects an unlabeled example, its label will be
revealed to the learner. A policy for choosing training examples is a mapping from a set of labeled
examples to an unlabeled example to be queried. This can be represented by a policy tree, where
a node represents the next example to be queried, and each edge from the node corresponds to a
possible label. We use policy and policy tree as synonyms. Figure 1 illustrates two policy trees
with their top three levels: in the non-adaptive setting, the policy ignores the labels of the previously
selected examples, so all examples at the same depth of the policy tree are the same; in the adaptive
setting, the policy takes into account the observed labels when choosing the next example.
A full policy tree for a pool X is a policy tree of height |X|. A partial policy tree is a subtree of a full policy tree with the same root. The class of policies of height k will be denoted by Π_k. Our query criterion gives a method to build a full policy tree one level at a time. The main building block is the probability distribution p0^π[·] over all possible paths from the root to the leaves for any (full or partial) policy tree π. This distribution over paths is induced from the uncertainty in the fixed labeling y* for X: since y* is drawn randomly from p0[y*; X], the path ρ followed from the root to a leaf of the policy tree during the execution of π is also a random variable. If x_ρ (resp. y_ρ) is the sequence of examples (resp. labels) along path ρ, then the probability of ρ is p0^π[ρ] := p0[y_ρ; x_ρ].
3 Maximum Gibbs Error Criterion for Active Learning
A commonly used objective for active learning in the non-adaptive setting is to choose k training
examples such that their Shannon entropy is maximal, as this reduces uncertainty in the later stage.
We first give a generalization of the concept of Shannon entropy to general (both adaptive and non-adaptive) policies. Formally, the policy entropy of a policy π is

H(π) := E_{ρ∼p0^π}[ −ln p0^π[ρ] ].
From this definition, policy entropy is the Shannon entropy of the paths in the policy. The policy
entropy reduces to the Shannon entropy on a set of examples when the policy is non-adaptive. The
following result gives a formal statement that maximizing policy entropy minimizes the uncertainty
on the label of the remaining unlabeled examples in the pool. Suppose a path ρ has been observed; the labels of the remaining examples in X \ x_ρ follow the distribution p_ρ[ · ; X \ x_ρ], where p_ρ is the posterior obtained after observing (x_ρ, y_ρ). The entropy of this distribution will be denoted by G(ρ) and will be called the posterior label entropy of the remaining examples given ρ. Formally, G(ρ) = −Σ_y p_ρ[y; X \ x_ρ] ln p_ρ[y; X \ x_ρ], where the summation is over all possible labelings y of X \ x_ρ. The posterior label entropy of a policy π is defined as G(π) = E_{ρ∼p0^π} G(ρ).
Theorem 1. For any k ≥ 1, if a policy π in Π_k maximizes H(π), then π minimizes the posterior label entropy G(π).
Proof. It can be easily verified that H(π) + G(π) is the Shannon entropy of the label distribution p0[ · ; X], which is a constant (detailed proof is in the supplementary). Thus, the theorem follows.
The usual maximum Shannon entropy criterion, which selects the next example x maximizing E_{y∼pD[y;x]}[ −ln pD[y; x] ] where D is the previously observed labeled examples, can be thought of as a greedy heuristic for building a policy π maximizing H(π). However, it is still unknown whether this greedy criterion has any theoretical guarantee, except for the non-adaptive case.
In this paper, we introduce a new objective for active learning: the policy Gibbs error. This new
objective is a lower bound of the policy entropy and there are near-optimal greedy algorithms to
optimize it. Intuitively, the policy Gibbs error of a policy π is the expected probability for a Gibbs classifier to make an error on the set adaptively selected by π. Formally, we define the policy Gibbs error of a policy π as

V(π) := E_{ρ∼p0^π}[ 1 − p0^π[ρ] ],    (1)

In the above equation, 1 − p0^π[ρ] is the probability that a Gibbs classifier makes an error on the selected set along the path ρ. Theorem 2 below, which is straightforward from the inequality x ≥ 1 + ln x, states that the policy Gibbs error is a lower bound of the policy entropy.
Theorem 2. For any (full or partial) policy π, we have V(π) ≤ H(π).
Given a budget of k queries, our proposed objective is to find π* = argmax_{π∈Π_k} V(π), the height-k policy with maximum policy Gibbs error. By maximizing V(π), we hope to maximize the policy entropy H(π), and thus minimize the uncertainty in the remaining examples. Furthermore, we
also hope to obtain a small expected error of a posterior Gibbs classifier, which upper bounds the
Bayes error but is at most twice of it. Using this objective, we propose greedy algorithms for
active learning that are provably near-optimal for probabilistic hypotheses. We will consider the
non-adaptive, adaptive and batch settings.
3.1 The Non-adaptive Setting
In the non-adaptive setting, the policy π ignores the observed labels: it never updates the posterior. This is equivalent to selecting a set of examples before any labeling is done. In this setting, the examples selected along all paths of π are the same. Let x_π be the set of examples selected by π. The Gibbs error of a non-adaptive policy π is simply

V(π) = E_{y∼p0[ · ; x_π]}[ 1 − p0[y; x_π] ].

Thus, the optimal non-adaptive policy selects a set S of examples maximizing its Gibbs error, which is defined by pg_0(S) := 1 − Σ_y p0[y; S]².
In general, the Gibbs error of a distribution P is 1 − Σ_i P[i]², where the summation is over elements in the support of P. The Gibbs error is a special case of the Tsallis entropy used in nonextensive statistical mechanics [4] and is known to be monotone submodular [7]. From the properties of monotone submodular functions [8], the greedy non-adaptive policy that selects the next example

x_{i+1} = argmax_x { pg_0(S_i ∪ {x}) } = argmax_x { 1 − Σ_y p0[y; S_i ∪ {x}]² },    (2)

where S_i is the set of previously selected examples, is near-optimal compared to the best non-adaptive policy. This is stated below.
Theorem 3. Given a budget of k ≥ 1 queries, let π_n be the non-adaptive policy in Π_k selecting examples using Equation (2), and let π_n* be the non-adaptive policy in Π_k with the maximum policy Gibbs error. Then, V(π_n) > (1 − 1/e)V(π_n*).
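As an illustration of Equation (2), here is a minimal sketch of the greedy non-adaptive selection, assuming the hypothetical label_prob helper above and enumerating labelings exhaustively (so it is only feasible when the selected set is small):

```python
import itertools

def gibbs_error(prior, predict, labels, S):
    # pg_0(S) = 1 - sum over all labelings y of S of p_0[y; S]^2
    return 1.0 - sum(label_prob(prior, predict, S, y) ** 2
                     for y in itertools.product(labels, repeat=len(S)))

def greedy_nonadaptive(prior, predict, labels, pool, k):
    S = []
    for _ in range(k):
        best = max((x for x in pool if x not in S),
                   key=lambda x: gibbs_error(prior, predict, labels, S + [x]))
        S.append(best)
    return S
```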
3.2 The Adaptive Setting
In the adaptive setting, a policy takes into account the observed labels when choosing the next
example. This is done via the posterior update after observing the label of a selected example.
The adaptive setting is the most common setting for active learning. We now describe a greedy
adaptive algorithm for this setting that is near-optimal. Assume that the current posterior obtained
after observing the labeled examples D is pD . Our greedy algorithm selects the next example x that
maximizes pg_D(x):

x* = argmax_x pg_D(x) = argmax_x { 1 − Σ_{y∈𝒴} pD[y; x]² }.    (3)

From the definition of pg_D in Section 3.1, pg_D(x) is in fact the Gibbs error of a 1-step policy with respect to the prior pD. Thus, we call this greedy criterion the adaptive maximum Gibbs error criterion (maxGEC). Note that in binary classification where |𝒴| = 2, maxGEC selects the same example as the maximum Shannon entropy and the least confidence criteria. However, they are different in the multi-class case. Theorem 4 below states that maxGEC is near-optimal compared to the best adaptive policy with respect to the objective in Equation (1).
Theorem 4. Given a budget of k ≥ 1 queries, let π^maxGEC be the adaptive policy in Π_k selecting examples using maxGEC and π* be the adaptive policy in Π_k with the maximum policy Gibbs error. Then, V(π^maxGEC) > (1 − 1/e)V(π*).
The proof for this theorem is in the supplementary material. The main idea of the proof is to reduce
probabilistic hypotheses to deterministic ones by expanding the hypothesis space. For deterministic
hypotheses, we show that maxGEC is equivalent to maximizing the version space reduction objective, which is known to be adaptive monotone submodular [2]. Thus, we can apply a known result
for optimizing adaptive monotone submodular function [2] to obtain Theorem 4.
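For concreteness, the adaptive loop can be sketched as below. The query and posterior_update callbacks (revealing a label and conditioning the posterior) are hypothetical and not specified by the paper; this is a sketch of the criterion in Equation (3), not a reference implementation.

```python
def maxgec_step(posterior, predict, labels, pool):
    # pick x maximizing pg_D(x) = 1 - sum_y p_D[y; x]^2 (Equation (3))
    return max(pool, key=lambda x: 1.0 - sum(
        label_prob(posterior, predict, [x], [y]) ** 2 for y in labels))

def run_maxgec(prior, predict, labels, pool, k, query, posterior_update):
    posterior, observed = dict(prior), []
    for _ in range(k):
        x = maxgec_step(posterior, predict, labels, pool)
        y = query(x)                       # reveal the true label of x
        pool = [z for z in pool if z != x]
        observed.append((x, y))
        posterior = posterior_update(posterior, x, y)
    return observed
```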
Algorithm 1 Batch maxGEC for Bayesian Batch Active Learning
Input: Unlabeled pool X, prior p0, number of iterations k, and batch size s.
for i = 0 to k − 1 do
  S ← ∅
  for j = 0 to s − 1 do
    x* ← argmax_x pg_i(S ∪ {x});  S ← S ∪ {x*};  X ← X \ {x*}
  end for
  y_S ← Query-labels(S)
  p_{i+1} ← Posterior-update(p_i, S, y_S)
end for
3.3 The Batch Setting
In the batch setting [5], we query the labels of s (instead of 1) examples each time, and we do
this for a given number of k iterations. After each iteration, we query the labeling of the selected
batch and update the posterior based on this labeling. The new posterior can be used to select the
next batch of examples. A non-adaptive policy can be seen as a batch policy that selects only one
batch. Algorithm 1 describes a greedy algorithm for this setting which we call the batch maxGEC
algorithm. At iteration i of the algorithm with the posterior pi , the batch S is first initialized to be
empty, then s examples are greedily chosen one at a time using the criterion
x* = argmax_x pg_i(S ∪ {x}).    (4)
This is equivalent to running the non-adaptive greedy algorithm in Section 3.1 to select each batch.
Query-labels(S) returns the true labeling yS of S and Posterior-update(pi , S, yS ) returns the new
posterior obtained from the prior pi after observing yS .
The following theorem states that batch maxGEC is near optimal compared to the best batch policy
with respect to the objective in Equation (1). The proof for this theorem is in the supplementary
material. The proof also makes use of the reduction to deterministic hypotheses and the adaptive
submodularity of version space reduction.
Theorem 5. Given a budget of k batches of size s, let π_b^maxGEC be the batch policy selecting k batches using batch maxGEC and π_b* be the batch policy selecting k batches with maximum policy Gibbs error. Then, V(π_b^maxGEC) > (1 − e^{−(e−1)/e}) V(π_b*).
This theorem has a different bounding constant than those in Theorems 3 and 4 because it uses two
levels of approximation to compute the batch policy: at each iteration, it approximates the optimal
batch by greedily choosing one example at a time using equation (4) (1st approximation). Then it
uses these chosen batches to approximate the optimal batch policy (2nd approximation). In contrast,
the fully adaptive case has batch size 1 and only needs the 2nd approximation, while the non-adaptive
case chooses 1 batch and only needs the 1st approximation.
In non-adaptive and batch settings, our algorithms need to sum over all labelings of the previously
selected examples in a batch to choose the next example. This summation is usually expensive and
it restricts the algorithms to small batches. However, we note that small batches may be preferred in
some real problems. For example, if there is a small number of annotators and labeling one example
takes a long time, we may want to select a batch size that matches the number of annotators. In
this case, the annotators can label the examples concurrently while we can make use of the labels as
soon as they are available. It would take a longer time to label a larger batch and we cannot use the
labels until all the examples in the batch are labeled.
4 Computing maxGEC
We now discuss how to compute maxGEC and batch maxGEC for some probabilistic models. Computing the values is often difficult and we discuss some sampling methods for this task.
4.1 MaxGEC for Bayesian Conditional Exponential Models
A conditional exponential model defines the conditional probability P_θ[y|x] of a structured label y given a structured input x as P_θ[y|x] = exp(Σ_{i=1}^m θ_i F_i(y, x)) / Z_θ(x), where θ = (θ_1, θ_2, ..., θ_m) is the parameter vector, F_i(y, x) is the total score of the i-th feature, and Z_θ(x) = Σ_y exp(Σ_{i=1}^m θ_i F_i(y, x)) is the partition function.
Algorithm 2 Approximation for Equation (4).
Input: Selected unlabeled examples S, current unlabeled example x, current posterior pD^c.
Sample M label vectors (y^i)_{i=0}^{M−1} of (X \ T) ∪ T̄ from pD^c using Gibbs sampling and set r ← 0.
for i = 0 to M − 1 do
  for y ∈ 𝒴 do
    p̂D^c[h(S) = y_S^i ∧ h(x) = y] ← M^{−1} |{ y^j : y_S^j = y_S^i and y_x^j = y }|
    r ← r + ( p̂D^c[h(S) = y_S^i ∧ h(x) = y] )²
  end for
end for
return 1 − r
A well-known conditional exponential model is the linear-chain conditional random field (CRF) [6], in which x and y both have sequence structures. That is, x = (x_1, x_2, ..., x_{|x|}) ∈ 𝒳^{|x|} and y = (y_1, y_2, ..., y_{|x|}) ∈ 𝒴^{|x|}. In this model, F_i(y, x) = Σ_{j=1}^{|x|} f_i(y_j, y_{j−1}, x), where f_i(y_j, y_{j−1}, x) is the score of the i-th feature at position j.
In the Bayesian setting, we assume a prior p0[θ] = Π_{i=1}^m p0[θ_i] on θ, where p0[θ_i] = N(θ_i | 0, σ²) for a known σ. After observing the labeled examples D = {(x_j, y_j)}_{j=1}^t, we can obtain the posterior

pD[θ] = p0[θ|D] ∝ ( Π_{j=1}^t (1/Z_θ(x_j)) exp( Σ_{i=1}^m θ_i F_i(y_j, x_j) ) ) · exp( −(1/2) Σ_{i=1}^m (θ_i/σ)² ).
For active learning, we need to estimate the Gibbs error in Equation (3) from the posterior pD. For each x, we can approximate the Gibbs error pg_D(x) = 1 − Σ_y pD[y; x]² by sampling N hypotheses θ_1, θ_2, ..., θ_N from the posterior pD. In this case, pg_D(x) ≈ 1 − N^{−2} Σ_{j=1}^N Σ_{t=1}^N Z_{θ_j+θ_t}(x) / ( Z_{θ_j}(x) Z_{θ_t}(x) ). The derivation for this formula is in the supplementary material. If we only use the MAP hypothesis θ̂ to approximate the Gibbs error (i.e. the non-Bayesian setting), then N = 1 and pg_D(x) ≈ 1 − Z_{2θ̂}(x) / Z_{θ̂}(x)².
This approximation can be done efficiently if we can compute the partition functions Z? (~x) efficiently for any ?. This condition holds for a wide range of models including logistic regression,
linear-chain CRF, semi-Markov CRF [9], and sparse high-order semi-Markov CRF [10].
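Concretely, given a routine log_Z(theta, x) that returns ln Z_θ(x) (for a linear-chain CRF this would be the standard forward algorithm; here it is an assumed oracle, and the posterior samples thetas are assumed to be numpy arrays), the sampled approximation above can be written as:

```python
import numpy as np

def crf_gibbs_error(x, thetas, log_Z):
    # pg_D(x) ~ 1 - N^{-2} sum_{j,t} Z_{theta_j+theta_t}(x) / (Z_{theta_j}(x) Z_{theta_t}(x))
    N = len(thetas)
    log_zs = [log_Z(theta, x) for theta in thetas]
    total = sum(np.exp(log_Z(thetas[j] + thetas[t], x) - log_zs[j] - log_zs[t])
                for j in range(N) for t in range(N))
    return 1.0 - total / N ** 2
```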
4.2 Batch maxGEC for Bayesian Transductive Naive Bayes
We discuss an algorithm to approximate batch maxGEC for non-adaptive and batch active learning
with Bayesian transductive Naive Bayes. First, we describe the Bayesian transductive Naive Bayes
model for text classification. Let Y ∈ 𝒴 be a random variable denoting the label of a document and W ∈ 𝒲 be a random variable denoting a word. In a Naive Bayes model, the parameters are θ = {θ_y}_{y∈𝒴} ∪ {θ_{w|y}}_{w∈𝒲, y∈𝒴}, where θ_y = P[Y = y] and θ_{w|y} = P[W = w | Y = y]. For a document X and a label Y, if X = {W_1, W_2, ..., W_{|X|}} where W_i is a word in the document, we model the joint distribution P[X, Y] = θ_Y Π_{i=1}^{|X|} θ_{W_i|Y}.
In the Bayesian setting, we have a prior p0[θ] such that θ_y ∼ Dirichlet(α) and θ_{w|y} ∼ Dirichlet(β_y) for each y. When we observe the labeled documents, we update the posterior by counting the labels and the words in each document label. The posterior parameters also follow Dirichlet distributions.
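Since the Dirichlet prior is conjugate to the categorical likelihood, this update amounts to adding observed counts to the prior pseudo-counts. A minimal sketch under symmetric priors (scalar pseudo-counts alpha and beta, matching the experiments below) is:

```python
from collections import Counter

def dirichlet_posterior(docs, labels, alpha, beta, vocab, classes):
    # docs: list of word lists; labels: the corresponding class labels
    label_counts = Counter(labels)
    word_counts = {c: Counter() for c in classes}
    for doc, c in zip(docs, labels):
        word_counts[c].update(doc)
    # posterior Dirichlet parameters = prior pseudo-counts + observed counts
    post_alpha = {c: alpha + label_counts[c] for c in classes}
    post_beta = {c: {w: beta + word_counts[c][w] for w in vocab} for c in classes}
    return post_alpha, post_beta
```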
Let X be the original pool of training examples and T̄ be the unlabeled testing examples. In the transductive setting, we work with the conditional prior p0^c[θ] = p0[θ | X; T̄]. For a set D = (T, y_T) of labeled examples, where T ⊆ X is the set of selected examples and y_T is the labeling of T, the conditional posterior is pD^c[θ] = p0[θ | X; T̄; D] = pD[θ | (X \ T) ∪ T̄], where pD[θ] = p0[θ|D] is the Dirichlet posterior of the non-transductive model. To implement the batch maxGEC algorithm, we need to estimate the Gibbs error in Equation (4) from the conditional posterior. Let S be the currently selected batch. For each unlabeled example x ∉ S, we need to estimate:

1 − Σ_{y_S, y} ( pD^c[h(S) = y_S ∧ h(x) = y] )²  =  1 − E_{y_S}[ ( Σ_y ( pD^c[h(S) = y_S ∧ h(x) = y] )² ) / pD^c[y_S; S] ],
Table 1: AUC of different learning algorithms with batch size s = 10.
Task                                        TPass   maxGEC   LC      NPass   LogPass   LogFisher
alt.atheism/comp.graphics                   87.43   91.69    91.66   84.98   91.63     93.92
talk.politics.guns/talk.politics.mideast    84.92   92.03    92.16   80.80   86.07     88.36
comp.sys.mac.hardware/comp.windows.x        73.17   93.60    92.27   74.41   85.87     88.71
rec.motorcycles/rec.sport.baseball          93.82   96.40    96.23   92.33   89.46     93.90
sci.crypt/sci.electronics                   60.46   85.51    85.86   60.85   82.89     87.72
sci.space/soc.religion.christian            92.38   95.83    95.45   89.72   91.16     94.04
soc.religion.christian/talk.politics.guns   91.57   95.94    95.59   85.56   90.35     93.96
Average                                     83.39   93.00    92.75   81.24   88.21     91.52
where the expectation is with respect to the distribution pD^c[y_S; S]. We can use Gibbs sampling to approximate this expectation. First, we sample M label vectors y_{(X\T)∪T̄} of the remaining unlabeled examples from pD^c using Gibbs sampling. Then, for each y_S, we estimate pD^c[y_S; S] by counting the fraction of the M sampled vectors consistent with y_S. For each y_S and y, we also estimate pD^c[h(S) = y_S ∧ h(x) = y] by counting the fraction of the M sampled vectors consistent with both y_S and y on S ∪ {x}. This approximation is equivalent to Algorithm 2. In the algorithm, y_S^i is the labeling of S according to y^i.
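One way to realize this estimate in code is sketched below; samples stands for the M Gibbs-sampled label vectors (each represented as a dict from example to label) and is an assumed input. The Counter-based form sums the squared empirical joint frequencies of the observed (y_S, y) pairs, which estimates the left-hand side of the displayed equation:

```python
from collections import Counter

def batch_gibbs_error_estimate(samples, S, x):
    # estimate 1 - sum_{y_S, y} (p^c_D[h(S) = y_S and h(x) = y])^2
    M = len(samples)
    joint = Counter((tuple(s[e] for e in S), s[x]) for s in samples)
    return 1.0 - sum((count / M) ** 2 for count in joint.values())
```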
5 Experiments
5.1 Named Entity Recognition (NER) with CRF
In this experiment, we consider the NER task with the Bayesian CRF model described in Section
4.1. We use a subset of the CoNLL 2003 NER task [11] which contains 1928 training and 969
test sentences. Following the setting in [12], we let the cost of querying the label sequence of each
sentence be 1. We implement two versions of maxGEC with the approximation algorithm in Section
4.1: the first version approximates Gibbs error by using only the MAP hypothesis (maxGEC-MAP)
and the second version approximates Gibbs error by using 50 hypotheses sampled from the posterior (maxGEC-50). We sample the hypotheses for maxGEC-50 from the posterior by the Metropolis-Hastings algorithm with the MAP hypothesis as the initial point.
We compare the maxGEC algorithms with 4 other learning criteria: passive learner (Passive), active
learner which chooses the longest unlabeled sequence (Longest), active learner which chooses the
unlabeled sequence with maximum Shannon entropy (SegEnt), and active learner which chooses the
unlabeled sequence with the least confidence (LeastConf). For SegEnt and LeastConf, the entropy
and confidence are estimated from the MAP hypothesis. For all the algorithms, we use the MAP
hypothesis for Viterbi decoding. To our knowledge, there is no simple way to compute SegEnt or
LeastConf criteria from a finite sample of hypotheses except for using only the MAP estimation.
The difficulty is to compute a summation (minimization for LeastConf) over all the outputs ~y in the
complex structured models. For maxGEC, the summation can be rearranged to obtain the partition
functions, which can be computed efficiently using known inference algorithms. This is thus an
advantage of using maxGEC.
We compare the total area under the F1 curve (AUC) for each algorithm after querying the first 500
sentences. As a percentage of the maximum score of 500, algorithms Passive, Longest, SegEnt,
LeastConf, maxGEC-MAP and maxGEC-50 attain 72.8, 67.0, 75.4, 75.5, 75.8 and 76.0 respectively. Hence, the maxGEC algorithms perform better than all the other algorithms, and significantly
so over the Passive and Longest algorithms.
5.2 Text Classification with Bayesian Transductive Naive Bayes
In this experiment, we consider the text classification model in Section 4.2 with the meta-parameters
α = (0.1, . . . , 0.1) and β_y = (0.1, . . . , 0.1) for all y. We implement batch maxGEC (maxGEC)
with the approximation in Algorithm 2 and compare with 5 other algorithms: passive learner with
Bayesian transductive Naive Bayes model (TPass), least confidence active learner with Bayesian
transductive Naive Bayes model (LC), passive learner with Bayesian non-transductive Naive Bayes
model (NPass), passive learner with logistic regression model (LogPass), and batch active learner
with Fisher information matrix and logistic regression model (LogFisher) [5]. To implement the
least confidence algorithm, we sample M label vectors as in Algorithm 2 and use them to estimate
the label distribution for each unlabeled example. The algorithm will then select s examples whose
label is least confident according to these estimates.
We run the algorithms on 7 binary tasks from the 20Newsgroups dataset [13] with batch size s =
10, 20, 30 and report the areas under the accuracy curve (AUC) for the case s = 10 in Table 1. The
results for s = 20, 30 are in the supplementary material. The results are obtained by averaging over
5 different runs of the algorithms, and the AUCs are normalized so that their range is from 0 to
100. From the results, maxGEC obtains the best AUC scores on 4/7 tasks for each batch size and
also the best average AUC scores. LC also performs well and its scores are only slightly lower than
maxGEC. The passive learning algorithms are much worse than the active learning algorithms.
6 Related Work
Among pool-based active learning algorithms, greedy methods are the simplest and most common
[14]. Often, the greedy algorithms try to maximize the uncertainty, e.g. Shannon entropy, of the
example to be queried [12]. For non-adaptive active learning, greedy optimization of the Shannon
entropy guarantees near optimal performance due to the submodularity of the entropy [2]. However,
this has not been shown to extend to adaptive active learning, where each example is labeled as soon
as it is selected, and the labeled examples are exploited in selecting the next example to label.
Although greedy algorithms work well in practice [12, 14], they usually do not have any theoretical
guarantee except for the case where data are noiseless. In the noiseless Bayesian setting, an algorithm
called generalized binary search was proven to be near-optimal: its expected number of queries is
within a factor of (ln(1/min_h p0[h]) + 1) of the optimum, where p0 is the prior [2]. This result was obtained using the adaptive submodularity of the version space reduction. Adaptive submodularity is
an adaptive version of submodularity, a natural diminishing returns property. The adaptive submodularity of version space reduction was also applied to the batch setting to prove the near-optimality of
a batch greedy algorithm that maximizes the average version space reduction for each selected batch
[3]. The maxGEC and batch maxGEC algorithms that we proposed in this paper can be seen as generalizations of these version space reduction algorithms to the noisy setting. When the hypotheses
are deterministic, our algorithms are equivalent to these version space reduction algorithms.
For the case of noisy data, a noisy version of the generalized binary search was proposed [15]. The
algorithm was proven to be optimal under the neighborly condition, a very limited setting where
?each hypothesis is locally distinguishable from all others? [15]. In another work, Bayesian active
learning was modeled by the Equivalance Class Determination problem and a greedy algorithm
called EC2 was proposed for this problem [16]. Although the cost of EC2 is provably near-optimal,
this formulation requires an explicit noise model and the near-optimality bound is only useful when
the support of the noise model is small. Our formulation, in contrast, is simpler and does not require
an explicit noise model: the noise model is implicit in the probabilistic model and our algorithms
are only limited by computational concerns.
7 Conclusion
We considered a new objective function for Bayesian active learning: the policy Gibbs error. With
this objective, we described the maximum Gibbs error criterion for selecting the examples. The algorithm has near-optimality guarantees in the non-adaptive, adaptive and batch settings. We discussed
algorithms to approximate the Gibbs error criterion for Bayesian CRF and Bayesian transductive
Naive Bayes. We also showed that the criterion is useful for NER with CRF model and for text
classification with Bayesian transductive Naive Bayes model.
Acknowledgments
This work is supported by DSO grant DSOL11102 and the US Air Force Research Laboratory under
agreement number FA2386-12-1-4031.
References
[1] Andrew McCallum and Kamal Nigam. Employing EM and Pool-Based Active Learning for Text Classification. In International Conference on Machine Learning (ICML), pages 350–358, 1998.
[2] Daniel Golovin and Andreas Krause. Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization. Journal of Artificial Intelligence Research, 42(1):427–486, 2011.
[3] Yuxin Chen and Andreas Krause. Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization. In International Conference on Machine Learning (ICML), pages 160–168, 2013.
[4] Constantino Tsallis and Edgardo Brigatti. Nonextensive statistical mechanics: A brief introduction. Continuum Mechanics and Thermodynamics, 16(3):223–235, 2004.
[5] Steven C. H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. Batch Mode Active Learning and Its Application to Medical Image Classification. In International Conference on Machine Learning (ICML), pages 417–424. ACM, 2006.
[6] John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In International Conference on Machine Learning (ICML), pages 282–289, 2001.
[7] Bassem Sayrafi, Dirk Van Gucht, and Marc Gyssens. The implication problem for measure-based constraints. Information Systems, 33(2):221–239, 2008.
[8] G. L. Nemhauser and L. A. Wolsey. Best Algorithms for Approximating the Maximum of a Submodular Set Function. Mathematics of Operations Research, 3(3):177–188, 1978.
[9] Sunita Sarawagi and William W. Cohen. Semi-Markov Conditional Random Fields for Information Extraction. Advances in Neural Information Processing Systems (NIPS), 17:1185–1192, 2004.
[10] Viet Cuong Nguyen, Nan Ye, Wee Sun Lee, and Hai Leong Chieu. Semi-Markov Conditional Random Field with High-Order Features. In ICML Workshop on Structured Sparsity: Learning and Inference, 2011.
[11] Erik F. Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the 17th Conference on Natural Language Learning (HLT-NAACL 2003), pages 142–147, 2003.
[12] Burr Settles and Mark Craven. An Analysis of Active Learning Strategies for Sequence Labeling Tasks. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1070–1079. Association for Computational Linguistics, 2008.
[13] Thorsten Joachims. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. Technical report, DTIC Document, 1996.
[14] Burr Settles. Active Learning Literature Survey. Technical Report 1648, University of Wisconsin-Madison, 2009.
[15] Robert Nowak. Noisy Generalized Binary Search. Advances in Neural Information Processing Systems (NIPS), 22:1366–1374, 2009.
[16] Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-Optimal Bayesian Active Learning with Noisy Observations. In Advances in Neural Information Processing Systems (NIPS), pages 766–774, 2010.
4,374 | 4,959 | Marginals-to-Models Reducibility
Michael Kearns
University of Pennsylvania
[email protected]
Tim Roughgarden
Stanford University
[email protected]
Abstract
We consider a number of classical and new computational problems regarding
marginal distributions, and inference in models specifying a full joint distribution.
We prove general and efficient reductions between a number of these problems,
which demonstrate that algorithmic progress in inference automatically yields
progress for "pure data" problems. Our main technique involves formulating the
problems as linear programs, and proving that the dual separation oracle required
by the ellipsoid method is provided by the target problem. This technique may be
of independent interest in probabilistic inference.
1 Introduction
The movement between the specification of "local" marginals and models for complete joint distributions is ingrained in the language and methods of modern probabilistic inference. For instance,
in Bayesian networks, we begin with a (perhaps partial) specification of local marginals or CPTs,
which then allows us to construct a graphical model for the full joint distribution. In turn, this allows
us to make inferences (perhaps conditioned on observed evidence) regarding marginals that were not
part of the original specification.
In many applications, the specification of marginals is derived from some combination of (noisy)
observed data and (imperfect) domain expertise. As such, even before the passage to models for the
full joint distribution, there are a number of basic computational questions we might wish to ask of
given marginals, such as whether they are consistent with any joint distribution, and if not, what the
nearest consistent marginals are. These can be viewed as questions about the "data", as opposed to
inferences made in models derived from the data.
In this paper, we prove a number of general, polynomial time reductions between such problems
regarding data or marginals, and problems of inference in graphical models. By "general" we mean
the reductions are not restricted to particular classes of graphs or algorithmic approaches, but show
that any computational progress on the target problem immediately transfers to progress on the
source problem. For example, one of our main results establishes that the problem of determining
whether given marginals, whose induced graph (the "data graph") falls within some class G, are
consistent with any joint distribution reduces to the problem of MAP inference in Markov networks
falling in the same class G. Thus, for instance, we immediately obtain that the tractability of MAP
inference in trees or tree-like graphs yields an efficient algorithm for marginal consistency in tree
data graphs; and any future progress in MAP inference for other classes G will similarly transfer.
Conversely, our reductions also can be used to establish negative results. For instance, for any class
G for which we can prove the intractability of marginal consistency, we can immediately infer the
intractability of MAP inference as well.
There are a number of reasons to be interested in such problems regarding marginals. One, as
we have already suggested, is the fact that given marginals may not be consistent with any joint
Figure 1: Summary of main results. Arrows indicate that the source problem can be reduced to the target
problem for any class of graphs G, and in polynomial time. Our main results are the left-to-right arrows from
marginals-based problems to Markov net inference problems.
distribution, due to noisy observations or faulty domain intuitions,1 and we may wish to know this
before simply passing to a joint model that forces or assumes consistency. At the other extreme,
given marginals may be consistent with many joint distributions, with potentially very different
properties.2 Rather than simply selecting one of these consistent distributions in which to perform
inference (as would typically happen in the construction of a Markov or Bayes net), we may wish to
reason over the entire class of consistent distributions, or optimize over it (for instance, choosing to
maximize or minimize independence).
We thus consider four natural algorithmic problems involving (partially) specified marginals:
• CONSISTENCY: Is there any joint distribution consistent with given marginals?
• CLOSEST CONSISTENCY: What are the consistent marginals closest to given inconsistent marginals?
• SMALL SUPPORT: Of the consistent distributions with the closest marginals, can we compute one with support size polynomial in the data (i.e., number of given marginal values)?
• MAX ENTROPY: What is the maximum entropy distribution closest to given marginals?
The consistency problem has been studied before as the membership problem for the marginal polytope (see Related Work); in the case of inconsistency, the closest consistency problem seeks the
minimal perturbation to the data necessary to recover coherence.
When there are many consistent distributions, which one should be singled out? While the maximum entropy distribution is a staple of probabilistic inference, it is not the only interesting answer.
For example, consider the three features "votes Republican", "supports universal healthcare", and "supports tougher gun control", and suppose the single marginals are 0.5, 0.5, 0.5. The maximum
entropy distribution is uniform over the 8 possibilities. We might expect reality to hew closer to a
small support distribution, perhaps even 50/50 over the two vectors 100 and 011. The small support
problem can be informally viewed as attempting to minimize independence or randomization, and
thus is a natural contrast to maximum entropy. It is also worth noting that small support distributions
arise naturally through the joint behavior of no-regret algorithms in game-theoretic settings [1].
We also consider two standard algorithmic inference problems on full joint distributions (models):
1 For a simple example, consider three random variables for which each pairwise marginal specifies that the settings (0,1) and (1,0) each occur with probability 1/2. The corresponding "data graph" is a triangle. This requires that each variable always disagrees with the other two, which is impossible.
2 For example, consider random variables X, Y, Z. Suppose the pairwise marginals for X and Y and for Y and Z specify that all four binary settings are equally likely. No pairwise marginals for X and Z are given, so the data graph is a two-hop path. One consistent distribution flips a fair coin independently for each variable; but another flips one coin for X, a second for Y, and sets Z = X. The former maximizes entropy while the latter minimizes support size.
• MAP INFERENCE: What is the MAP joint assignment in a given Markov network?
• GENERALIZED PARTITION: What is the normalizing constant of a given Markov network, possibly after conditioning on the value of one vertex or edge?
All six of these problems are parameterized by a class of graphs G: for the four marginals problems, this is the graph induced by the given pairwise marginals, while for the models problems, it is the graph of the given Markov network. All of our reductions are of the form "for every class G, if there is a polynomial-time algorithm for solving inference problem B for (model) graphs in G, then there is a polynomial-time algorithm for marginals problem A for (marginal) graphs in G"; that is, A reduces to B. Our main results, which are summarized in Figure 1, can be stated informally as follows:
• CONSISTENCY reduces to MAP INFERENCE.
• CLOSEST CONSISTENCY reduces to MAP INFERENCE.
• SMALL SUPPORT reduces to MAP INFERENCE.
• MAX ENTROPY reduces to GENERALIZED PARTITION.3
While connections between some of these problems are known for specific classes of graphs (most notably in trees, where all of these problems are tractable and rely on common underlying algorithmic approaches such as dynamic programming), the novelty of our results is their generality, showing that the above reductions hold for every class of graphs.
All of our reductions share a common and powerful technique: the use of the ellipsoid method
for Linear Programming (LP), with the key step being the articulation of an appropriate separation
oracle. The first three problems we consider have a straightforward LP formulation which will
typically have a number of variables that is equal to the number of joint settings, and therefore
exponential in the number of variables; for the MAX ENTROPY problem, there is an analogous
convex program formulation. Since our goal is to run in time polynomial in the input length (the
number and size of given marginals), the straightforward LP formulation will not suffice. However,
by passing to the dual LP, we instead obtain an LP with only a polynomial number of variables, but
an exponential number of constraints that can be represented implicitly. For each of the reductions
above, we show that the required separation oracle for these implicit constraints is provided exactly
by the corresponding inference problem (MAP INFERENCE or GENERALIZED PARTITION). We
believe this technique may be of independent interest and have other applications in probabilistic
inference.
It is perhaps surprising that in the study of problems strictly addressing properties of given marginals
(which have received relatively little attention in the graphical models literature historically), problems of inference in full joint models (which have received great attention) should arise so naturally
and generally. For the marginal problems, our reductions (via the ellipsoid method) effectively
create a series of "fictitious" Markov networks such that the solutions to corresponding inference problems (MAP INFERENCE and GENERALIZED PARTITION) indirectly lead to a solution to the
original marginal problems.
Related Work: The literature on graphical models and probabilistic inference is rife with connections between some of the problems we study here for specific classes of graphical models (such as
trees or otherwise sparse structures), and under specific algorithmic approaches (such as dynamic
programming or message-passing algorithms more generally, and various forms of variational inference); see [2, 3, 4] for good overviews. In contrast, here we develop general and efficient reductions
between marginal and inference problems that hold regardless of the graph structure or algorithmic
approach; we are not aware of prior efforts in this vein. Some of the problems we consider are also
either new or have been studied very little, such as CLOSEST CONSISTENCY and SMALL SUPPORT.
The CONSISTENCY problem has been studied before as the membership problem for the marginal polytope. In particular, [8] shows that finding the MAP assignment for Markov random fields with pairwise potentials can be cast as an integer linear program over the marginal polytope; that is, algorithms for the CONSISTENCY problem are useful subroutines for inference. Our work is the
^3 The conceptual ideas in this reduction are well known. We include a formal treatment in the Appendix for completeness and to provide an analogy with our other reductions, which are our more novel contributions.
first to show a converse, that inference algorithms are useful subroutines for decision and optimization problems for the marginal polytope. Furthermore, previous polynomial-time solutions to the CONSISTENCY problem generally give a compact (polynomial-size) description of the marginal polytope. Our approach dodges this ambitious requirement, in that it only needs a polynomial-time separation oracle (which, for this problem, turns out to be MAP inference). As there are many combinatorial optimization problems with no compact LP formulation that admit polynomial-time ellipsoid-based algorithms (like non-bipartite matching, with its exponentially many odd cycle inequalities), our approach provides a new way of identifying computationally tractable special cases of problems concerning marginals.
The previous work that is perhaps most closely related in spirit to our interests are [5] and [6, 7].
These works provide reductions of some form, but not ones that are both general (independent of
graph structure) and polynomial time. However, they do suggest both the possibility and interest in
such stronger reductions. The paper [5] discusses and provides heuristic reductions between MAP INFERENCE and GENERALIZED PARTITION.
The work in [6, 7] makes the point that maximizing entropy subject to an (approximate) consistency
condition yields a distribution that can be represented as a Markov network over the graph induced
by the original data or marginals. As far as we are aware, however, there has been essentially no formal complexity analysis (i.e., worst-case polynomial-time guarantees) for algorithms that compute max-entropy distributions.^4
2 Preliminaries
2.1 Problem Definitions
For clarity of exposition, we focus on the pairwise case in which every marginal involves at most two variables.^5 Denote the underlying random variables by X_1, ..., X_n, which we assume have range [k] = {0, 1, 2, ..., k}. The input is at most one real-valued single marginal value µ_is for every variable i ∈ [n] and value s ∈ [k], and at most one real-valued pairwise marginal value µ_ijst for every ordered variable pair i, j ∈ [n]×[n] with i < j and every pair s, t ∈ [k]. Note that we allow a marginal to be only partially specified. The data graph induced by a set of marginals has one vertex per random variable X_i, and an undirected edge (i, j) if and only if at least one of the given pairwise marginal values involves the variables X_i and X_j. Let M1 and M2 denote the sets of indices (i, s) and (i, j, s, t) of the given single and pairwise marginal values, and m = |M1| + |M2| the total number of marginal values. Let A = [k]^n denote the space of all possible variable assignments. We say that the given marginals µ are consistent if there exists a (joint) probability distribution consistent with all of them (i.e., that induces the marginals µ).
With these basic definitions, we can now give formal definitions for the marginals problems we consider. Let G denote an arbitrary class of undirected graphs.
• CONSISTENCY (G): Given marginals µ such that the induced data graph falls in G, are they consistent?
• CLOSEST CONSISTENCY (G): Given (possibly inconsistent) marginals µ such that the induced data graph falls in G, compute the consistent marginals ν minimizing ||µ − ν||_1.
• SMALL SUPPORT (G): Given (consistent or inconsistent) marginals µ such that the induced data graph falls in G, compute a distribution that has a polynomial-size support and marginals ν that minimize ||µ − ν||_1.
• MAX ENTROPY (G): Given (consistent or inconsistent) marginals µ such that the induced data graph falls in G, compute the maximum entropy distribution that has marginals ν that minimize ||µ − ν||_1.
^4 There are two challenges to doing this. The first, which has been addressed in previous work, is to circumvent the exponential number of decision variables via a separation oracle. The second, which does not seem to have been previously addressed, is to bound the diameter of the search space (i.e., the magnitude of the optimal Lagrange variables). Proving this requires using special properties of the MAX ENTROPY problem, beyond mere convexity. We adapt recent techniques of [13] to provide the necessary argument.
^5 All of our results generalize to the case of higher-order marginals in a straightforward manner.
It is important to emphasize that all of the problems above are "model-free", in that we do not assume that the marginals are consistent with, or generated by, any particular model (such as a Markov network). They are simply given marginals, or "data".
For each of these problems, our interest is in algorithms whose running time is polynomial in the size of the input µ. The prospects for this depend strongly on the class G, with tractability generally following for "nice" classes such as tree or tree-like graphs, and intractability for the most general cases. Our contribution is in showing a strong connection between tractability for these marginals problems and the following inference problems for any class G.
• MAP INFERENCE (G): Given a Markov network whose graph falls in G, find the maximum a posteriori (MAP) or most probable joint assignment.^6
• GENERALIZED PARTITION: Given a Markov network whose graph falls in G, compute the partition function or normalization constant for the full joint distribution, possibly after conditioning on the value of a single vertex or edge.^7
2.2 The Ellipsoid Method for Linear Programming
Our algorithms for the CONSISTENCY, CLOSEST CONSISTENCY, and SMALL SUPPORT problems use linear programming. There are a number of algorithms that solve explicitly described linear programs in time polynomial in the description size. Our problems, however, pose an additional challenge: the obvious linear programming formulation has size exponential in the parameters of interest. To address this challenge, we turn to the ellipsoid method [9], which can solve in polynomial time linear programs that have an exponential number of implicitly described constraints, provided there is a polynomial-time "separation oracle" for these constraints. The ellipsoid method is discussed exhaustively in [10, 11]; we record in this section the facts necessary for our results.
Definition 2.1 (Separation Oracle) Let P = {x ∈ R^n : a_1^T x ≤ b_1, ..., a_m^T x ≤ b_m} denote the feasible region of m linear constraints in n dimensions. A separation oracle for P is an algorithm that takes as input a vector x ∈ R^n, and either (i) verifies that x ∈ P; or (ii) returns a constraint i such that a_i^T x > b_i. A polynomial-time separation oracle runs in time polynomial in n, the maximum description length of a single constraint, and the description length of the input x.
One obvious separation oracle is to simply check, given a candidate solution x, each of the m
constraints in turn. More interesting and relevant are constraint sets that have size exponential in the
dimension n but admit a polynomial-time separation oracle.
Theorem 2.2 (Convergence Guarantee of the Ellipsoid Method [9]) Suppose the set P = {x ∈ R^n : a_1^T x ≤ b_1, ..., a_m^T x ≤ b_m} admits a polynomial-time separation oracle and c^T x is a linear objective function. Then, the ellipsoid method solves the optimization problem {max c^T x : x ∈ P} in time polynomial in n and the maximum description length of a single constraint or objective function. The method correctly detects if P = ∅. Moreover, if P is non-empty and bounded, the ellipsoid method returns a vertex of P.^8
Theorem 2.2 provides a general reduction from a problem to an intuitively easier one: if the problem
of verifying membership in P can be solved in polynomial time, then the problem of optimizing an
arbitrary linear function over P can also be solved in polynomial time. This reduction is "many-to-one", meaning that the ellipsoid method invokes the separation oracle for P a large (but polynomial)
number of times, each with a different candidate point x. See Appendix A.1 for a high-level description of the ellipsoid method and [10, 11] for a detailed treatment.
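As an aside for intuition (this sketch is ours, not the paper's), the central-cut ellipsoid feasibility test can be driven entirely by such an oracle. The interface oracle(x), the starting radius R, the tolerance eps, and the iteration cap are illustrative assumptions; the point is only how each violated constraint returned by the oracle shrinks the search ellipsoid.

    import numpy as np

    def ellipsoid_feasibility(oracle, n, R=1e3, eps=1e-9, max_iter=100000):
        # oracle(x) returns None if x is feasible, else the vector a of a
        # violated constraint a^T x <= b (so a^T x > b at the query point).
        # Standard central-cut updates; assumes n >= 2.
        x = np.zeros(n)               # ellipsoid center
        P = (R ** 2) * np.eye(n)      # shape matrix: start from a large ball
        for _ in range(max_iter):
            a = oracle(x)
            if a is None:
                return x              # feasible point found
            a = np.asarray(a, dtype=float)
            Pa = P @ a
            norm = np.sqrt(a @ Pa)
            if norm < eps:            # ellipsoid too small to continue
                return None
            g = Pa / norm             # scaled cut direction
            x = x - g / (n + 1)       # move center away from the violated cut
            P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
        return None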
The ellipsoid method also applies to convex programming problems under some additional technical conditions. This is discussed in Appendix A.2 and applied to the MAX ENTROPY problem in Appendix A.3.
^6 Formally, the input is a graph G = (V, E) with a log-potential log ψ_i(s) and log ψ_ij(s, t) for each vertex i ∈ V and edge (i, j) ∈ E, each value s ∈ [k] = {0, 1, 2, ..., k}, and each pair s, t ∈ [k]×[k] of values. The MAP assignment maximizes P(a) := ∏_{i∈V} ψ_i(a_i) ∏_{(i,j)∈E} ψ_ij(a_i, a_j) over all assignments a ∈ [k]^V.
^7 Formally, given the log-potentials of a Markov network, compute Σ_{a∈[k]^n} P(a); Σ_{a: a_i=s} P(a) for a given i, s; or Σ_{a: a_i=s, a_j=t} P(a) for a given i, j, s, t.
^8 A vertex is a point of P that satisfies with equality n linearly independent constraints.
3 CONSISTENCY Reduces to MAP INFERENCE
The goal of this section is to reduce the CONSISTENCY problem for data graphs in the family G to the MAP INFERENCE problem for networks in G.
Theorem 3.1 (Main Result 1) Let G be a set of graphs. If the MAP INFERENCE (G) problem can be solved in polynomial time, then the CONSISTENCY (G) problem can be solved in polynomial time.
We begin with a straightforward linear programming formulation of the CONSISTENCY problem.
Lemma 3.2 (Linear Programming Formulation) An instance of the CONSISTENCY problem admits a consistent distribution if and only if the following linear program (P) has a solution:

(P)  max_p  0
subject to:
  Σ_{a∈A: a_i=s} p_a = µ_is              for all (i, s) ∈ M1
  Σ_{a∈A: a_i=s, a_j=t} p_a = µ_ijst     for all (i, j, s, t) ∈ M2
  Σ_{a∈A} p_a = 1
  p_a ≥ 0                                for all a ∈ A.
Solving (P) using the ellipsoid method (Theorem 2.2), or any other linear programming method, requires time at least |A| = (k+1)^n, the number of decision variables. This is generally exponential in the size of the input, which is proportional to the number m of given marginal values.
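To make the formulation concrete, the following brute-force sketch (ours, not the paper's) builds (P) explicitly and hands it to an off-the-shelf LP solver; it enumerates all (k+1)^n assignments and is therefore only usable for tiny n, which is exactly the blow-up the dual/ellipsoid route below avoids. The dictionary-based input format is an assumption made for illustration.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def consistency_by_enumeration(n, k, singles, pairs):
        # singles: {(i, s): mu_is}; pairs: {(i, j, s, t): mu_ijst}.
        A = list(itertools.product(range(k + 1), repeat=n))  # all assignments
        rows, rhs = [], []
        for (i, s), mu in singles.items():                   # single-marginal rows
            rows.append([1.0 if a[i] == s else 0.0 for a in A])
            rhs.append(mu)
        for (i, j, s, t), mu in pairs.items():               # pairwise rows
            rows.append([1.0 if a[i] == s and a[j] == t else 0.0 for a in A])
            rhs.append(mu)
        rows.append([1.0] * len(A))                          # normalization
        rhs.append(1.0)
        res = linprog(np.zeros(len(A)), A_eq=np.array(rows), b_eq=np.array(rhs),
                      bounds=[(0, None)] * len(A), method="highs")
        if not res.success:
            return None                                      # inconsistent marginals
        return {a: p for a, p in zip(A, res.x) if p > 1e-9}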
A ray of hope is provided by the fact that the number of constraints of the linear program in
Lemma 3.2 is equal to the number of marginal values. With an eye toward applying the ellipsoid
method (Theorem 2.2), we consider the dual linear program. We use the following notation. Given
a vector y indexed by M1 ∪ M2, we define

  y(a) = Σ_{(i,s)∈M1: a_i=s} y_is + Σ_{(i,j,s,t)∈M2: a_i=s, a_j=t} y_ijst            (1)

for each assignment a ∈ A, and

  µ^T y = Σ_{(i,s)∈M1} µ_is y_is + Σ_{(i,j,s,t)∈M2} µ_ijst y_ijst.                   (2)
Strong linear programming duality implies the following.
Lemma 3.3 (Dual Linear Programming Formulation) An instance of the CONSISTENCY problem admits a consistent distribution if and only if the optimal value of the following linear program (D) is 0:

(D)  max_{y,z}  µ^T y + z
subject to:
  y(a) + z ≤ 0    for all a ∈ A
  y, z unrestricted.
The number of variables in (D), one per constraint of the primal linear program, is polynomial in the size of the CONSISTENCY input.
What use is the MAP INFERENCE problem for solving the CONSISTENCY problem? The next lemma forges the connection.
Lemma 3.4 (MAP Inference as a Separation Oracle) Let G be a set of graphs and suppose that the MAP INFERENCE (G) problem can be solved in polynomial time. Consider an instance of the CONSISTENCY problem with a data graph in G, and a candidate solution y, z to the corresponding dual linear program (D). Then, there is a polynomial-time algorithm that checks whether or not
there is an assignment a ∈ A that satisfies

  Σ_{(i,s)∈M1: a_i=s} y_is + Σ_{(i,j,s,t)∈M2: a_i=s, a_j=t} y_ijst > −z,             (3)
and produces such an assignment if one exists.
Proof: The key idea is to interpret y as the log-potentials of a Markov network. Precisely, construct a Markov network N as follows. The vertex set V and edge set E correspond to the random variables and edge set of the data graph of the CONSISTENCY instance. The potential function at a vertex i is defined as ψ_i(s) = exp{y_is} for each value s ∈ [k]. The potential function at an edge (i, j) is defined as ψ_ij(s, t) = exp{y_ijst} for (s, t) ∈ [k]×[k]. For a missing pair (i, s) ∉ M1 or 4-tuple (i, j, s, t) ∉ M2, we define the corresponding potential value ψ_i(s) or ψ_ij(s, t) to be 1. The underlying graph of N is the same as the data graph of the given CONSISTENCY instance and hence is a member of G.
In the distribution induced by N, the probability of an assignment a ∈ [k]^n is, by definition, proportional to

  ( ∏_{i∈V: (i,a_i)∈M1} exp{y_{i a_i}} ) · ( ∏_{(i,j)∈E: (i,j,a_i,a_j)∈M2} exp{y_{ij a_i a_j}} ) = exp{y(a)}.
That is, the MAP assignment for the Markov network N is the assignment that maximizes the left-hand side of (3).
Checking if some assignment a ∈ A satisfies (3) can thus be implemented as follows: compute the MAP assignment a* for N (by assumption, and since the graph of N lies in G, this can be done in polynomial time); return a* if it satisfies (3), and otherwise conclude that no assignment a ∈ A satisfies (3).
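Spelled out as (hypothetical) code, the oracle looks as follows; brute-force enumeration stands in for the polynomial-time MAP solver that the lemma assumes for the class G, and the dictionary inputs are illustrative names.

    import itertools

    def dual_separation_oracle(n, k, y_single, y_pair, z):
        # y_single: {(i, s): y_is}; y_pair: {(i, j, s, t): y_ijst}.
        def y_of(a):  # y(a) as defined in equation (1)
            total = sum(v for (i, s), v in y_single.items() if a[i] == s)
            total += sum(v for (i, j, s, t), v in y_pair.items()
                         if a[i] == s and a[j] == t)
            return total
        # A real use of Lemma 3.4 replaces this enumeration with a MAP
        # solver for the relevant graph class.
        best = max(itertools.product(range(k + 1), repeat=n), key=y_of)
        return best if y_of(best) + z > 1e-12 else None  # violated constraint, or None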
All of the ingredients for the proof of Theorem 3.1 are now in place.
Proof of Theorem 3.1: Assume that there is a polynomial-time algorithm for the MAP INFERENCE (G) problem with the family G of graphs, and consider an instance of the CONSISTENCY problem with data graph G ∈ G. Deciding whether or not this instance has a consistent distribution is equivalent to solving the program (D) in Lemma 3.3. By Theorem 2.2, the ellipsoid method can be used to solve (D) in polynomial time, provided the constraint set admits a polynomial-time separation oracle. Lemma 3.4 shows that the relevant separation oracle is equivalent to computing the MAP assignment of a Markov network with graph G ∈ G. By assumption, the latter problem can be solved in polynomial time.
We defined the CONSISTENCY problem as a decision problem, where the answer is "yes" or "no". For instances that admit a consistent distribution, we can also ask for a succinct representation of a distribution that witnesses the marginals' consistency. We next strengthen Theorem 3.1 by showing that for consistent instances, under the same hypothesis, we can compute a small-support consistent distribution in polynomial time. See Figure 2 for the high-level description of the algorithm.
Theorem 3.5 (Small-Support Witnesses) Let G be a set of graphs. If the MAP INFERENCE (G) problem can be solved in polynomial time, then for every consistent instance of the CONSISTENCY (G) problem with m = |M1| + |M2| marginal values, a consistent distribution with support size at most m + 1 can be computed in polynomial time.
Proof: Consider a consistent instance of CONSISTENCY with data graph G ∈ G. The algorithm of Theorem 3.1 concludes by solving the dual linear program of Lemma 3.3 using the ellipsoid method. This method runs for a polynomial number K of iterations, and each iteration generates one new inequality. At termination, the algorithm has identified a "reduced dual linear program", in which a set of only K out of the original (k+1)^n constraints is sufficient to prove the optimality of its solution. By strong duality, the corresponding "reduced primal linear program", obtained from the linear program in Lemma 3.2 by retaining only the decision variables corresponding to the K
1. Solve the dual linear program (D) (Lemma 3.3) using the ellipsoid method (Theorem 2.2), using the given polynomial-time algorithm for MAP INFERENCE (G) to implement the ellipsoid separation oracle (see Lemma 3.4).
2. If the dual (D) has a nonzero (and hence, unbounded) optimal objective function value, then report "no consistent distributions" and halt.
3. Explicitly form the reduced primal linear program (P-red), obtained from (P) by retaining only the variables that correspond to the dual inequalities generated by the separation oracle in Step 1.
4. Solve (P-red) using a polynomial-time linear programming algorithm that returns a vertex solution, and return the result.
Figure 2: High-level description of the polynomial-time reduction from CONSISTENCY (G) to MAP INFERENCE (G) (Steps 1 and 2) and postprocessing to extract a small-support distribution that witnesses consistent marginals (Steps 3 and 4).
reduced dual constraints, has optimal objective function value 0. In particular, this reduced primal linear program is feasible.
The reduced primal linear program has a polynomial number of variables and constraints, so it can be solved by the ellipsoid method (or any other polynomial-time method) to obtain a feasible point p. The point p is an explicit description of a consistent distribution with support size at most K. To improve the support size upper bound from K to m + 1, recall from Theorem 2.2 that p is a vertex of the feasible region, meaning it satisfies K linearly independent constraints of the reduced primal linear program with equality. This linear program has at most one constraint for each of the m given marginal values, at most one normalization constraint Σ_{a∈A} p_a = 1, and non-negativity constraints. Thus, at least K − m − 1 of the constraints that p satisfies with equality are non-negativity constraints. Equivalently, it has at most m + 1 strictly positive entries.
4 CLOSEST CONSISTENCY, SMALL SUPPORT Reduce to MAP INFERENCE
This section considers the CLOSEST CONSISTENCY and SMALL SUPPORT problems. The input to these problems is the same as in the CONSISTENCY problem: single marginal values µ_is for (i, s) ∈ M1 and pairwise marginal values µ_ijst for (i, j, s, t) ∈ M2. The goal is to compute sets of marginals {ν_is}_{M1} and {ν_ijst}_{M2} that are consistent and, subject to this constraint, minimize the ℓ1 norm ||µ − ν||_1 with respect to the given marginals. An algorithm for the CLOSEST CONSISTENCY problem solves the CONSISTENCY problem as a special case, since a given set of marginals is consistent if and only if the corresponding CLOSEST CONSISTENCY problem has optimal objective function value 0. Despite this greater generality, the CLOSEST CONSISTENCY problem also reduces in polynomial time to the MAP INFERENCE problem, as does the still more general SMALL SUPPORT problem.
Theorem 4.1 (Main Result 2) Let G be a set of graphs. If the MAP INFERENCE (G) problem can be solved in polynomial time, then the CLOSEST CONSISTENCY (G) problem can be solved in polynomial time. Moreover, a distribution consistent with the optimal marginals with support size at most 3m + 1 can be computed in polynomial time, where m = |M1| + |M2| denotes the number of marginal values.
The formulation of the CLOSEST CONSISTENCY (G) problem has linear constraints (the same as those in Lemma 3.2, except with the given marginals µ replaced by the computed consistent marginals ν) but a nonlinear objective function ||µ − ν||_1. We can simulate the absolute value functions in the objective by adding a small number of variables and constraints. We provide details and the proof of Theorem 4.1 in Appendix A.4.
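For completeness, the standard linearization alluded to here (our gloss; the paper's own details are in its Appendix A.4) introduces one error variable per marginal value, e.g. in LaTeX:

    \begin{aligned}
    \min\ & \textstyle\sum_{(i,s)\in M_1} e_{is} + \sum_{(i,j,s,t)\in M_2} e_{ijst}\\
    \text{s.t.}\ & e_{is} \ge \mu_{is}-\nu_{is}, \qquad e_{is} \ge \nu_{is}-\mu_{is},\\
                 & e_{ijst} \ge \mu_{ijst}-\nu_{ijst}, \qquad e_{ijst} \ge \nu_{ijst}-\mu_{ijst},
    \end{aligned}

together with the consistency constraints on ν; at an optimum each e equals the corresponding absolute deviation, so the objective is exactly ||µ − ν||_1.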
References
[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[2] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT
Press, 2009.
[3] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational
inference. Foundations and Trends in Machine Learning, 1(1), 2008.
[4] S. Lauritzen. Graphical Models. Oxford University Press, 1996.
[5] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. Proceedings of the 29th International Conference on Machine Learning, 2012.
[6] J. K. Johnson, V. Chandrasekaran, and A. S. Willsky. Learning markov structure by maximum
entropy relaxation. In 11th International Conference in Artificial Intelligence and Statistics
(AISTATS 2007), 2007.
[7] V. Chandrasekaran, J. K. Johnson, and A. S. Willsky. Maximum entropy relaxation for graphical model selection given inconsistent statistics. In IEEE Statistical Signal Processing Workshop (SSP 2007), 2007.
[8] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In Neural Information
Processing Systems (NIPS), 2007.
[9] L. G. Khachiyan. A polynomial algorithm in linear programming. Soviet Mathematics Doklady, 20(1):191-194, 1979.
[10] A. Ben-Tal and A. Nemirovski. Optimization III. Lecture notes, 2012.
[11] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer, 1988. Second Edition, 1993.
[12] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge, 2004.
[13] M. Singh and N. Vishnoi. Entropy, optimization and counting. arXiv, (1304.8108), 2013.
4,375 | 496 | Constrained Optimization Applied to the
Parameter Setting Problem for Analog Circuits
David Kirk, Kurt Fleischer, Lloyd Watts*, Alan Barr
Computer Graphics 350-74
California Institute of Technology
Pasadena, CA 91125
Abstract
We use constrained optimization to select operating parameters for two
circuits: a simple 3-transistor square root circuit, and an analog VLSI
artificial cochlea. This automated method uses computer controlled measurement and test equipment to choose chip parameters which minimize
the difference between the actual circuit's behavior and a specified goal
behavior. Choosing the proper circuit parameters is important to compensate for manufacturing deviations or adjust circuit performance within
a certain range. As biologically-motivated analog VLSI circuits become
increasingly complex, implying more parameters, setting these parameters
by hand will become more cumbersome. Thus an automated parameter
setting method can be of great value [Fleischer 90]. Automated parameter
setting is an integral part of a goal-based engineering design methodology
in which circuits are constructed with parameters enabling a wide range
of behaviors, and are then "tuned" to the desired behaviors automatically.
1
Introduction
Constrained optimization methods are useful for setting the parameters of analog
circuits. We present two experiments in which an automated method successfully
finds parameter settings which cause our circuit's behavior to closely approximate
the desired behavior. These parameter-setting experiments are described in Section 3. The difficult subproblems encountered were (1) building the electronic setup
*Dept of Electrical Engineering 116-81
to acquire the data and control the circuit, and (2) specifying the computation of deviation from desired behavior in a mathematical form suitable for the optimization
tools. We describe the necessary components of the electronic setup in Section 2,
and we discuss the selection of optimization technique toward the end of Section 3.
Automated parameter setting can be an important component of a system to build
accurate analog circuits. The power of this method is enhanced by including appropriate parameters in the initial design of a circuit: we can build circuits with a wide
range of behaviors and then "tune" them to the desired behavior. In Section 6, we
describe a comprehensive design methodology which embodies this strategy.
2 Implementation
We have assembled a system which allows us to test these ideas. The system can
be conceptually decomposed into four distinct parts:
circuit: an analog VLSI chip intended to compute a particular function.
target function: a computational model quantitatively describing the desired behavior of the circuit . This model may have the same parameters as the circuit,
or may be expressed in terms of biological data that the circuit is to mimic.
error metric: compares the target to the actual circuit function, and computes a
difference measure.
constrained optimization tool: a numerical analysis tool, chosen based on the
characteristics of the particular problem posed by this circuit.
[Figure: block diagram; the constrained optimization tool sends parameters to the circuit, and the difference measure between the circuit output and the target function is fed back to the tool]
The constrained optimization tool uses the error metric to compute the difference
between the performance of the circuit and the target function. It then adjusts
the parameters to minimize the error metric, causing the actual circuit behavior to
approach the target function as closely as possible.
2.1
A Generic Physical Setup for Optimization
A typical physical setup for choosing chip parameters under computer control has
the following elements: an analog VLSI circuit, a digital computer to control the
optimization process, computer programmable voltage/current sources to drive the
chip, and computer programmable measurement devices, such as electrometers and
oscilloscopes, to measure the chip's response.
The combination of all of these elements provides a self-contained environment for
testing chips. The setting of parameters can then be performed at whatever level
of automation is desirable. In this way, all inputs to the chip and all measurements
of the outputs can be controlled by the computer.
3 The Experiments
We perform two experiments to set parameters of analog VLSI circuits using constrained optimization. The first experiment is a simple one-parameter system, a
3-transistor "square root" circuit. The second experiment uses a more complex
time-varying multi-parameter system, an analog VLSI electronic cochlea. The artificial cochlea is composed of cascaded 2nd order section filters.
3.1 Square Root Experiment
In the first experiment we examine a "square-root" circuit [Mead 89], which actually computes ax^α + b, where α is typically near 0.4. We introduce a parameter (V) into this circuit which varies α indirectly. By adjusting the voltage V in the square root circuit, as shown in Figure 1(a), we can alter the shape of the response curve.
[Figure: log-log plot of measured output vs. Iin (scale 10e-6); legend: sqrt(x)+b fit (curve), chip data (points)]
Figure 1: (a) Square root circuit. (b) Resulting fit.
We have little control over the values of a and b in this circuit, so we choose an error metric which optimizes α, targeting a curve which has a slope of 0.5 in log-log Iin vs. Iout space. Since b ≪ a√x, we can safely ignore b for the purposes of this parameter-setting experiment. The entire optimization process takes only a few minutes for this simple one-parameter system. Figure 1(b) shows the final results of the square root computation, with the circuit output normalized by a and b.
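As an illustrative sketch (ours, not the paper's), such an error metric can be computed from measured current pairs by fitting the log-log slope; the function name and the least-squares fit are assumptions made for the example.

    import numpy as np

    def slope_error(i_in, i_out, target_slope=0.5):
        # Fit a line to the measurements in log-log space; its slope
        # estimates the exponent, and the error is the squared
        # deviation from the target slope of 0.5.
        slope, _intercept = np.polyfit(np.log(i_in), np.log(i_out), 1)
        return (slope - target_slope) ** 2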
3.2 Analog VLSI Cochlea
As an example of a more complex system on which to test the constrained optimization technique, we chose a silicon cochlea, as described by [Lyon 88]. The silicon
cochlea is a cascade of lowpass second-order filter sections arranged such that the
natural frequency r of the stages decreases exponentially with distance into the
cascade, while the quality factor Q of the filters is the same for each section (tap).
The value of Q determines the peak gain at each tap.
[Figure: schematic of the cochlea circuit with its bias voltage rails]
Figure 2: Cochlea circuit
To specify the performance of such a cochlea, we need to specify the natural frequencies of the first and last taps, and the peak gain at each tap. These performance
parameters are controlled by bias voltages VTL, VTR, and VQ, respectively. The
parameter-setting problem for this circuit is to find the bias voltages that give the
desired performance. This optimization task is more lengthy than the square root
optimization. Each measurement of the frequency response takes a few minutes,
since it is composed of many individual instrument readings.
3.2.1 Cochlea Results
The results of our attempts to set parameters for the analog VLSI cochlea are quite encouraging.
[Figure: error metric vs. optimization step (0 to 80); legend: First tap (solid), Last tap (dashed)]
Figure 3: Error metric trajectories for gradient descent on cochlea
Figure 3 shows the trajectories of the error metrics for the first and last tap of the
cochlea. Most of the progress is made in the early steps, after which the optimization
is proceeding along the valley of the error surface, shown in Figure 5.
[Figure: frequency response (vertical axis 8 to -8) vs. frequency (100 to 10000, log scale); legend: First tap goal, First tap data, Last tap goal, Last tap data]
Figure 4: Target frequency response and gradient descent optimized data for cochlea
Figure 4 shows both the target frequency response data and the frequency responses
which result from our chosen parameter settings. The curves are quite similar, and
the differences are at the scale of measurement noise and instrument resolution in
our system.
3.2.2 Cochlea Optimization Strategies
We explored several optimization strategies for finding the best parameters for the
electronic cochlea. Of these, two are of particular interest:
special knowledge: use a priori knowledge of the effect of each knob to guide the
optimization
gradient descent: assume that we know nothing except the input/output relation
of the chip. Then we can estimate the gradient for gradient descent by varying
the inputs. Robust numerical techniques such as conjugate gradient can also
be helpful when the energy landscape is steep.
We found the gradient descent technique to be reliable, although it did not converge
nearly as quickly as the "special knowledge" optimization. This corresponds with
our intuition that any special knowledge we have about the circuit's operation will
aid us in setting the parameters.
4 Choosing An Appropriate Optimization Method
One element of our system which has worked without much difficulty is the optimization. However, more complex circuits may require more sophisticated optimization
methods. A wide variety of constrained optimization algorithms exist which are
Figure 5: The error surface for the error metric for the frequency response of the
first tap of the cochlea. Note the narrow valley in the error surface. Our target (the
minimum) lies near the far left, at the deepest part of the valley.
effective on particular classes of problems (gradient descent, quasi-Newton, simulated annealing, etc.) [Platt 89, Gill 81, Press 86, Fleischer 90], and we can choose
a method appropriate to the problem at hand. Techniques such as simulated annealing can find optimal parameter combinations for multi-parameter systems with
complex behavior, which gives us confidence that our methods will work for more
complex circuits.
The choice of error metric may also need to be reconsidered for more complex
circuits. For systems with time-varying signals, we can use an error metric which
captures the time course of the signal. We can deal with hysteresis by beginning at
a known state and following the same path for each optimization step. Noisy and
non-smooth functions can be improved by averaging data and using robust numeric
techniques which are less sensitive to noise.
5 Conclusions
The constrained optimization technique works well when a well-defined goal for chip
operation can be specified . We can compare automated parameter setting with
adjustment by hand: consider that humans often fail in the same situations where
optimization fails (eg. multiple local minima). In contrast, for larger dimensional
spaces, hand adjustment is very difficult, while an optimization technique may
succeed. We expect to integrate the technique into our chip development process,
and future developments will move the optimization and learning process gradually
into the chip . It is interesting to note that our gradient descent method "learns"
the parameters of the chip in a manner similar to backpropagation. Seen from this
perspective, this work is a step on the path toward robust on-chip learning.
In order to use this technique, there are two moderately difficult problems to address. First, one must assemble and interface the equipment to set parameters
and record results from the circuit under computer control (eg. voltage and current sources, electrometer, digital oscilloscope, etc). This is a one-time cost since a
similar setup can be used for many different circuits. A more difficult issue is how
to specify the target function of a circuit, and how to compute the error metric.
For example, in the simple square-root circuit, one might be more concerned about
behavior in a particular region, or perhaps along the entire range of operation. Care
must be taken to ensure that the combination of the target model and the error
metric accurately describes the desired behavior of the circuit.
The existence of an automated parameter setting mechanism opens up a new avenue
for producing accurate analog circuits. The goal of accurately computing a function
differs from the approach of providing a cheap (simple) circuit which loosely approximates the function [Gilbert 68] [Mead 89]. By providing appropriate parameters
in the design of a circuit, we can ensure that the desired function is in the domain
of possible circuit behaviors (given expected manufacturing tolerances). Thus we
define the domain of the circuit in anticipation of the parameter setting apparatus.
The optimization methods will then be able to find the best solution in the domain,
which could potentially be accurate to a high degree of precision.
6 The Goal-based Engineering Design Technique
The results of our optimization experiments suggest the adoption of a comprehensive
Goal-based Engineering Design Technique that directly affects how we design and
test chips.
Our results change the types of circuits we will try to build. The optimization
techniques allow us to aggresively design and build ambitious circuits and more
frequently have them work as expected, meeting our design goals. As a corollary,
we can confidently attack larger and more interesting problems.
The technique is composed of the following four steps:
1) goal-setting: identify the target function, or behavioral goals, of the design
2) circuit design: design the circuit with "knobs" (adjustable parameters) in it,
attempting to make sure desired (target) circuit behavior is in gamut of the
actual circuit, given expected manufacturing variation and device characteristics.
3) optimization plan: devise optimization strategy to explore parameter settings. This includes capabilities such as a digital computer to control the optimization, and computer-driven instruments which can apply voltages/currents
to the chip and measure voltage/current outputs.
4) optimization: use optimization procedure to select parameters to minimize deviation of actual circuit performance from the target function the optimization
may make use of special knowledge about the circuit, such as "I know that this knob has effect x," or interaction, such as "I know that this is a good region, so explore here."
[Figure: design flow diagram; the design goals feed both the circuit design and the optimization plan, and the circuit design yields the circuit]
The goal-setting process produces design goals that influence both the circuit design
and the form of the optimization plan. It is important to produce a match between
the design of the circuit and the plan for optimizing its parameters.
Acknowledgements
Many thanks to Carver Mead for ideas, encouragement, and support for this project.
Thanks also to John Lemoncheck for help getting our physical setup together.
Thanks to Hewlett-Packard for equipment donation. This work was supported
in part by an AT&T Bell Laboratories Ph.D. Fellowship. Additional support was
provided by NSF (ASC-89-20219). All opinions, findings, conclusions, or recommendations expressed in this document are those of the author and do not necessarily
reflect the views of the sponsoring agencies.
References
[Fleischer 90] Fleischer, K., J. Platt, and A. Barr, "An Approach to Solving the Parameter Setting Problem," IEEE/ACM 23rd Int'l Conf on System Sciences, January 1990.
[Gilbert 68] Gilbert, B., "A Precise Four-Quadrant Multiplier with Sub-nanosecond Response," IEEE Journal of Solid-State Circuits, SC-3:365, 1968.
[Gill 81] Gill, P. E., W. Murray, and M. H. Wright, "Practical Optimization," Academic Press, 1981.
[Lyon 88] Lyon, R. F., and C. A. Mead, "An Analog Electronic Cochlea," IEEE Trans. Acoust., Speech, and Signal Proc., Volume 36, Number 7, July 1988, pp. 1119-1134.
[Mead 89] Mead, C. A., "Analog VLSI and Neural Systems," Addison-Wesley, 1989.
[Platt 89] Platt, J. C., "Constrained Optimization for Neural Networks and Computer Graphics," Ph.D. Thesis, California Institute of Technology, Caltech-CS-TR-89-07, June 1989.
[Press 86] Press, W., Flannery, B., Teukolsky, S., Vetterling, W., "Numerical Recipes: the Art of Scientific Computing," Cambridge University Press, Cambridge, 1986.
4,376 | 4,960 | Learning Chordal Markov Networks by
Constraint Satisfaction
Jukka Corander
University of Helsinki
Finland
Tomi Janhunen
Aalto University
Finland
Jussi Rintanen
Aalto University
Finland
Henrik Nyman
Åbo Akademi University
Finland
Johan Pensar
Åbo Akademi University
Finland
Abstract
We investigate the problem of learning the structure of a Markov network from
data. It is shown that the structure of such networks can be described in terms of
constraints which enables the use of existing solver technology with optimization
capabilities to compute optimal networks starting from initial scores computed
from the data. To achieve efficient encodings, we develop a novel characterization of Markov network structure using a balancing condition on the separators
between cliques forming the network. The resulting translations into propositional satisfiability and its extensions such as maximum satisfiability, satisfiability
modulo theories, and answer set programming, enable us to prove the optimality of certain networks which have been previously found by stochastic search.
1 Introduction
Graphical models (GMs) represent the backbone of the generic statistical toolbox for encoding dependence structures in multivariate distributions. Using Markov networks or Bayesian networks
conditional independencies between variables can be readily communicated and used for various
computational purposes. The development of the statistical theory of GMs is largely set by the
seminal works of Darroch et al. [1] and Lauritzen and Wermuth [2]. Although various approaches
have been developed to generalize the theory of graphical models to allow for modeling of more
complex dependence structures, Markov networks and Bayesian networks are still widely used in
applications ranging from genetic mapping of diseases to machine learning and expert systems.
Bayesian learning of undirected GMs, also known as Markov random fields, from databases has
attained a considerable interest, both in the statistical and computer science literature [3, 4, 5, 6, 7,
8, 9]. The cardinality and complex topology of GM space pose difficulties with respect to both the
computational complexity of the learning task and the reliability of reaching representative model
structures. Solutions to these problems have been proposed in earlier work. Della Pietra et al. [10]
present a greedy local search algorithm for Markov network learning and apply it to discovering
word morphology. Lee et al. [11] reduce the learning problem to a convex optimization problem
that is solved by gradient descent. Related methods have been investigated later [12, 13].
This work was funded by the Academy of Finland, project 251170.
Funded by ERC grant 239784.
Also affiliated with the Helsinki Institute of Information Technology, Finland.
Also affiliated with Griffith University, Brisbane, Australia.
This work was funded by the Foundation of Åbo Akademi University, as part of the grant for the Center of Excellence in Optimization and Systems Engineering.
Certain types of stochastic search methods, such as Markov Chain Monte Carlo (MCMC) or simulated annealing, can be proven to be consistent with respect to the identification of a structure maximizing posterior probability [4, 5, 6, 7]. However, convergence of such methods towards the areas
associated with high posterior probabilities may still be slow when the number of nodes increases
[4, 6]. In addition, it is challenging to guarantee that the identified model indeed truly represents
the global optimum since the consistency of MCMC estimates is by definition a limit result. To the
best of our knowledge, strict constraint-based search methods have not been previously applied in
learning of Markov random fields. In this article, we formalize the structure of Markov networks
using constraints at a general level. This enables the development of reductions from the structure
learning problem to propositional satisfiability (SAT) [14] and its generalizations such as maximum
satisfiability (MAXSAT) [15], and satisfiability modulo theories (SMT) [16], as well as answer-set
programming (ASP) [17]. A main novelty is the recognition of maximum weight spanning trees
of the clique graph by a condition on the cardinalities of occurrences of variables in cliques and
separators, which we call the balancing condition.
The article is structured as follows. We first review some details of Markov networks and the respective structure learning problem in Section 2. To enable efficient encodings of Markov network
learning as a constraint satisfaction problem, in Section 3 we establish a new characterization of
the separators of a Markov network based on a balancing condition. In Section 4, we provide a
high-level description of how the learning problem can be expressed using constraints and sketch the
actual translations into propositional satisfiability (SAT) and its generalizations. We have implemented these translations and conducted experiments to study the performance of existing solver
technology on structure learning problems in Section 5 using two widely used datasets [18]. Finally,
some conclusions and possibilities for further research in this area are presented in Section 6.
2 Structure Learning for Markov Networks
An undirected graph G = ⟨V, E⟩ consists of a set of nodes V which represents a set of random variables and a set of undirected edges E ⊆ {{n, n′} | n, n′ ∈ V and n ≠ n′}. A path in a graph is a sequence of nodes such that every two consecutive nodes are connected by an edge. Two sets of nodes A and B are said to be separated by a third set of nodes D if every path between a node in A and a node in B contains at least one node in D. An undirected graph is chordal if for all paths n_0, ..., n_k with k ≥ 4 and n_0 = n_k there exist two nodes n_i, n_j in the path connected by an edge such that j ≠ i ± 1. A clique in a graph is a set of nodes c such that every two nodes in it are connected by an edge. In addition, there may not exist a set of nodes c′ such that c ⊂ c′ and every two nodes in c′ are connected by an edge. Given the set of cliques C in a chordal graph, the set of separators S can be obtained through intersections of the cliques ordered in terms of a junction tree [19]; this operation is considered thoroughly in Section 3.
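As a quick illustration (ours, not the paper's), these notions are directly available in standard graph libraries; for example, with networkx one can test chordality and list the maximal cliques of a small graph:

    import networkx as nx

    g = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])  # a 4-cycle plus a chord
    print(nx.is_chordal(g))          # True: every cycle of length >= 4 has a chord
    print(list(nx.find_cliques(g)))  # maximal cliques, e.g. [[1, 2, 3], [1, 3, 4]]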
A Markov network is defined as a pair consisting of a graph $G$ and a joint distribution $P_V$ over the variables in $V$. The graph specifies the dependence structure of the variables, and $P_V$ factorizes according to $G$ (see below). Given $G$ it is possible to ascertain if two sets of variables $A$ and $B$ are conditionally independent given another set of variables $D$, due to the global Markov property: $A \perp\!\!\!\perp B \mid D$ if $D$ separates $A$ from $B$.
For a Markov network with a chordal graph $G$, the probability of a joint outcome $x$ factorizes as
$$P_V(x) = \frac{\prod_{c_i \in C} P_{c_i}(x_{c_i})}{\prod_{s_i \in S} P_{s_i}(x_{s_i})}.$$
Following this factorization, the marginal likelihood of a dataset $X$ given a Markov network with a chordal graph $G$ can be written
$$P(X \mid G) = \frac{\prod_{c_i \in C} P_{c_i}(X_{c_i})}{\prod_{s_i \in S} P_{s_i}(X_{s_i})}.$$
By a suitable choice of prior distribution, the terms $P_{c_i}(X_{c_i})$ and $P_{s_i}(X_{s_i})$ can be calculated analytically. Let $a$ denote an arbitrary clique or separator containing the variables $X_a$ whose outcome space has cardinality $k$. Further, let $n_a^{(j)}$ denote the number of occurrences where $X_a = x_a^{(j)}$ in the dataset $X_a$. Now assign the Dirichlet$(\alpha_{a1}, \ldots, \alpha_{ak})$ distribution as prior over the probabilities $P_a(X_a = x_a^{(j)}) = \theta_j$, determining the distribution $P_a(X_a)$. Now $P_a(X_a)$ can be calculated as
$$P_a(X_a) = \int_{\theta} \prod_{j=1}^{k} (\theta_j)^{n_a^{(j)}} \cdot \pi_a(\theta)\, d\theta$$
where $\pi_a(\theta)$ is the density function of the Dirichlet prior distribution. By the standard properties of the Dirichlet integral, $P_a(X_a)$ can be reduced to the form
$$P_a(X_a) = \frac{\Gamma(\alpha)}{\Gamma(n_a + \alpha)} \prod_{j=1}^{k} \frac{\Gamma(n_a^{(j)} + \alpha_{aj})}{\Gamma(\alpha_{aj})}$$
where $\Gamma(\cdot)$ denotes the gamma function and
$$\alpha = \sum_{j=1}^{k} \alpha_{aj} \quad \text{and} \quad n_a = \sum_{j=1}^{k} n_a^{(j)}.$$
When dealing with the marginal likelihood of a dataset it is most often necessary to use the logarithmic value $\log P(X \mid G)$. Introducing the notations $v(c_i) = \log P_{c_i}(X_{c_i})$ and $v(s_i) = \log P_{s_i}(X_{s_i})$, the logarithmic value of the marginal likelihood can be written
$$\log P(X \mid G) = \sum_{c_i \in C} \log P_{c_i}(X_{c_i}) - \sum_{s_i \in S} \log P_{s_i}(X_{s_i}) = \sum_{c_i \in C} v(c_i) - \sum_{s_i \in S} v(s_i). \tag{1}$$
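As a concrete illustration of this score, the sketch below (our own variable names; not code from the paper) computes $v(a) = \log P_a(X_a)$ for a single clique or separator from its outcome counts, using the log-gamma form of the Dirichlet marginal likelihood above.

```python
from scipy.special import gammaln

def log_marginal(counts, alpha):
    """Log Dirichlet marginal likelihood v(a) = log P_a(X_a).

    counts: list of n_a^(j), occurrence counts of each joint outcome
            of the variables in the clique/separator a.
    alpha:  list of Dirichlet hyperparameters alpha_aj (same length).
    """
    n_a = sum(counts)
    a_sum = sum(alpha)
    v = gammaln(a_sum) - gammaln(n_a + a_sum)
    for n_j, a_j in zip(counts, alpha):
        v += gammaln(n_j + a_j) - gammaln(a_j)
    return v

# Example: a two-variable clique with 4 joint outcomes, uniform prior.
print(log_marginal([12, 3, 5, 10], [0.5, 0.5, 0.5, 0.5]))
```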
The learning problem is to find a graph G that optimizes the posterior distribution
P (X|G)P (G)
.
P (G|X) = P
G?G P (X|G)P (G)
Here G denotes the set of all graphs under consideration and P (G) is the prior probability assigned
to G. In the case where a uniform prior is used for the graphs the optimization problem reduces to
finding the graph with the largest marginal likelihood.
3 Fundamental Properties and Characterization Results
In this section, we point out some properties of chordal graphs and clique graphs that can be utilized
in the encodings of the learning problem. In particular, we develop a characterization of maximum
weight spanning trees in terms of a balancing condition on separators.
The separators needed for determining the score (1) of a candidate Markov network are defined as
follows. Given the cliques, we can form the clique graph, in which the nodes are the cliques and
there is an edge between two nodes if the corresponding cliques have a non-empty intersection.
We label each of the edges with this intersection and consider the cardinality of the label as its
weight. The separators are the edge labels of a maximum weight spanning tree of the clique graph.
Maximum weight spanning trees of arbitrary graphs can be found in polynomial time by reducing
the problem to finding minimum weight spanning trees. This reduction consists of negating all the
edge weights and then using any of the polynomial time algorithms for the latter problem [20]. There
may be several maximum weight spanning trees, but they induce exactly the same separators, and
they only differ in terms of which pairs of cliques induce the separators.
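The following sketch (illustrative only; all function names are ours) computes the separators by running Kruskal's algorithm on the clique-graph edges sorted by decreasing weight, which is equivalent to the minimum-spanning-tree reduction just described.

```python
from itertools import combinations

def separators(cliques):
    """Separators of a chordal graph, given its maximal cliques (sets),
    via a maximum weight spanning tree/forest of the clique graph."""
    # Candidate clique-graph edges, weighted by the intersection size.
    edges = []
    for i, j in combinations(range(len(cliques)), 2):
        inter = cliques[i] & cliques[j]
        if inter:
            edges.append((len(inter), i, j, inter))
    edges.sort(reverse=True)           # heaviest first = Kruskal on -w
    parent = list(range(len(cliques))) # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    seps = []
    for w, i, j, inter in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                   # edge joins two components: keep it
            parent[ri] = rj
            seps.append(inter)
    return seps

print(separators([{1, 2, 3}, {2, 3, 4}, {4, 5}]))  # [{2, 3}, {4}]
```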
To restrict the search space we can observe that a chordal graph with n nodes has at most n maximal
cliques [19]. This gives an immediate upper bound on the number of cliques chosen to build a
Markov network, which can be encoded as a simple cardinality constraint.
3.1 Characterization of Maximum Weight Spanning Trees
To simplify the encoding of maximum weight spanning trees (and forests) of chordal clique graphs,
we introduce the notion of balanced spanning trees (respectively, forests), and show that these two
concepts coincide for chordal graphs. Then separators can be identified more effectively: rather than
encoding an algorithm for finding maximum-weight spanning trees as constraints, it is sufficient to
select a subset of the edges of the clique graph that is acyclic and satisfies the balancing condition
expressible as a cardinality constraint over occurrences of nodes in cliques and separators.
Definition 1 (Balancing) A spanning tree (or forest) of a clique graph is balanced if for every node
n, the number of cliques containing n is one higher than the number of labeled edges containing n.
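Read procedurally, Definition 1 amounts to a simple count per node; a hypothetical checker could look like this.

```python
def is_balanced(cliques, sep_edges):
    """Check Definition 1: for every node n, (# cliques containing n)
    must equal (# labeled separator edges containing n) + 1.

    cliques:   list of sets of nodes (the tree's nodes)
    sep_edges: list of sets of nodes (the labels of the tree's edges)
    """
    nodes = set().union(*cliques)
    for n in nodes:
        in_cliques = sum(1 for c in cliques if n in c)
        in_seps = sum(1 for s in sep_edges if n in s)
        if in_cliques != in_seps + 1:
            return False
    return True

# The junction tree {1,2,3} -[{2,3}]- {2,3,4} -[{4}]- {4,5} is balanced:
print(is_balanced([{1, 2, 3}, {2, 3, 4}, {4, 5}], [{2, 3}, {4}]))  # True
```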
While in the following we state many results for spanning trees only, they can be straightforwardly generalized to spanning forests as well (in case the Markov networks are disconnected).
Lemma 2 For any clique graph, all its balanced spanning trees have the same weight.
Proof: This holds in general because the balancing condition requires exactly the same number of
occurrences of any node in the separator edges for any balanced spanning tree, and the weight is
defined as the sum of the occurrences of nodes in the edge labels.
Lemma 3 ([21, 22]) Any maximum weight spanning tree of the clique graph is a junction tree, and hence satisfies the running intersection property: for every pair of nodes $c$ and $c'$, $(c \cap c') \subseteq c''$ for all nodes $c''$ on the unique path between $c$ and $c'$.
Lemma 4 Let $T = \langle V, E_T \rangle$ be a maximum weight spanning tree of the clique graph $\langle V, E \rangle$ of a connected chordal graph. Then $T$ is balanced.
Proof: We order the tree by choosing an arbitrary clique as the root and by assigning a depth to all
nodes according to their distance from the root node. The rest of the proof proceeds by induction on
the height of subtrees starting from the leaf nodes as the base case. The induction hypothesis says
that all subtrees satisfy the balancing condition. The base cases are trivial: each leaf node (clique)
trivially satisfies the balancing condition, as there are no separators to consider.
In the inductive cases, we have a clique c at depth d, connected to one or more subtrees rooted at
neighboring cliques c1 , . . . , ck at depth d + 1, with the subtrees satisfying the balancing condition.
We show that the tree consisting of the clique c, the labeled edges connecting c respectively to
cliques c1 , . . . , ck , and the subtrees rooted at c1 , . . . , ck , satisfies the balancing condition.
First note that by Lemma 3, any maximum weight spanning tree of the clique graph is a junction tree and hence satisfies the running intersection property, meaning that for any two cliques $c_1$ and $c_2$ in the tree, every clique on the unique path connecting them includes $c_1 \cap c_2$.
We have to show that the subtree rooted at c is balanced, given that its subtrees are balanced. We
show that the balancing condition is satisfied for each node separately. So let n be one of the nodes
in the original graph. Now each of the subtrees rooted at some $c_i$ has either 0 occurrences of $n$, or $k_i \geq 1$ occurrences in the cliques and $k_i - 1$ occurrences in the edge labels, because by the induction hypothesis the balancing condition is satisfied. Four cases arise:
1. The node n does not occur in any of the subtrees.
Now the balancing condition is trivially satisfied for the subtree rooted at c, because n
either does not occur in c, or it occurs in c but does not occur in the label of any of the
edges to the subtrees.
2. The node n occurs in more than one subtree.
Since any maximum weight spanning tree is a junction tree by Lemma 3, n must occur
also in c and in the labels of the edges between c and the cliques in which the subtrees with
n are rooted. Let $s_1, \ldots, s_j$ be the numbers of occurrences of $n$ in the edge labels in the subtrees with at least one occurrence of $n$, and $t_1, \ldots, t_j$ the numbers of occurrences of $n$ in the cliques in the same subtrees.
By the induction hypothesis, these subtrees are balanced, and hence $t_i - s_i = 1$ for all $i \in \{1, \ldots, j\}$. The subtree rooted at $c$ now has $1 + \sum_{i=1}^{j} t_i$ occurrences of $n$ in the nodes (once in $c$ itself and then the subtrees) and $j + \sum_{i=1}^{j} s_i$ occurrences in the edge labels, where the $j$ occurrences are in the edges between $c$ and the $j$ subtrees.
We establish the balancing condition through a sequence of equalities. The first and the last expression are the two sides of the condition.
$$\Big(1 + \sum_{i=1}^{j} t_i\Big) - \Big(j + \sum_{i=1}^{j} s_i\Big) = 1 - j + \sum_{i=1}^{j} (t_i - s_i) \qquad \text{(reordering the terms)}$$
$$= 1 - j + j \qquad \text{(since } t_i - s_i = 1 \text{ for every subtree)}$$
$$= 1.$$
Hence also the subtree rooted at $c$ is balanced.
3. The node n occurs in one subtree and in c.
Let i be the index of the subtree in which n occurs. Since any maximum weight spanning
tree is a junction tree by Lemma 3, n must occur also in the clique ci . Hence n occurs in
the label of the edge from ci to c. Since the subtree is balanced, the new graph obtained by
adding the clique c and the edge with a label containing n is also balanced. Further, adding
all the other subtrees that do not contain n will not affect the balancing of n.
4. The node $n$ occurs in one subtree but not in $c$.
Since there are no occurrences of $n$ in any of the other subtrees, in $c$, or in the edge labels between $c$ and any of the subtrees, the balancing condition holds.
This completes the induction step and consequently, the whole spanning tree is balanced.
Lemma 5 Assume $T = \langle V, E_B \rangle$ is a spanning tree of the clique graph $G_C = \langle V, E \rangle$ of a chordal graph that satisfies the balancing condition. Then $T$ is a maximum weight spanning tree of $G_C$.
Proof: Let $T_M$ be one of the spanning trees of $G_C$ with the maximum weight $w$. By Lemma 4, this maximum weight spanning tree is balanced. By Lemma 2, $T$ has the same weight $w$ as $T_M$. Hence also $T$ is a maximum weight spanning tree of $G_C$.
Theorem 6 For any clique graph of a chordal graph, any of its subgraphs is a maximum weight
spanning tree if and only if it is a balanced acyclic subgraph.
4 Representation as Constraints
In this section we first show how the structure learning problem of Markov networks is cast as a
constraint satisfaction problem, and then formalize it concretely in the language of propositional
logic, as directly supported by SMT solvers and easily translatable into conjunctive normal form as
used by SAT and MAXSAT solvers. In ASP slightly different rule-based formulations are used.
The learning problem is formalized as follows. The goal is to find a balanced spanning tree (cf. Definition 1) for a set $C$ of cliques forming a Markov network and the set $S$ of separators induced by the tree structure. In addition, $C$ and $S$ are supposed to be optimal in the sense of (1), i.e., the overall score $v(C, S) = \sum_{c \in C} v(c) - \sum_{s \in S} v(s)$ is maximized. The individual score $v(c)$ for any set of nodes $c$ describes how well it reflects the interdependencies of the variables in $c$ in the data.
Definition 7 Let $V$ be a set of nodes representing random variables and $v : 2^V \to \mathbb{R}$ a scoring function. A solution to the Markov network learning problem is a set of cliques $C = \{c_1, \ldots, c_k\}$ satisfying the following requirements viewed as abstract constraints:
1. Every node is included in at least one of the chosen cliques in $C$, i.e., $\bigcup_{i=1}^{k} c_i = V$.
2. Cliques in $C$ are maximal, i.e.,
(a) for every $c, c' \in C$, if $c \subseteq c'$, then $c = c'$; and
(b) for every $c \subseteq V$, if $\mathrm{edges}(c) \subseteq \bigcup_{c' \in C} \mathrm{edges}(c')$, then $c \subseteq c'$ for some $c' \in C$,
where $\mathrm{edges}(c) = \{\{n, n'\} \subseteq c \mid n \neq n'\}$ is defined for each $c \subseteq V$.
3. The graph $\langle V, E \rangle$ with the set of edges $E = \bigcup_{c \in C} \mathrm{edges}(c)$ is chordal.
4. The set $C$ has a balanced spanning tree labeled by a set of separators $S = \{s_1, \ldots, s_l\}$.
Moreover, the solution is optimal if it maximizes the overall score $v(C, S)$.
The encodings of basic graph properties (conditions 1 and 2 above) are presented in Section 4.1. The more complex properties (3 and 4) are addressed in Sections 4.2 and 4.3.
4.1 Graph Properties
We assume that clique candidates, which are the non-empty subsets of $V$, are indexed from 1 to $2^{|V|}$. We often identify a clique with its index. Each clique candidate $c \subseteq V$ has an associated score $v(c)$. To encode the search space for Markov networks, we introduce, for every clique candidate $c$, a propositional variable $x_c$ denoting that $c$ is part of the learned network. We also introduce propositional variables $e_{n,m}$ that represent edges $\{n, m\}$ that are in at least one chosen clique (see Footnote 1 in Section 4.2).
To formalize condition 1 of Definition 7, for every node $n$ we have the constraint
$$x_{c_1} \lor \cdots \lor x_{c_k} \tag{2}$$
where $c_1, \ldots, c_k$ are all cliques $c$ with $n \in c$.
To satisfy the maximality condition 2(a), we require that if a clique is chosen, then at least one edge in each of its super-cliques is not chosen. We first make the edges of the chosen cliques explicit by the next constraint for all $\{n, m\} \subseteq V$ and cliques $c_1, \ldots, c_k$ such that $\{n, m\} \subseteq c_i$.
$$e_{n,m} \leftrightarrow (x_{c_1} \lor \cdots \lor x_{c_k}) \tag{3}$$
Then for every clique candidate $c = \{n_1, \ldots, n_k\}$ and every node $n \in V \setminus c$ we have the constraint
$$x_c \to (\neg e_{n_1,n} \lor \cdots \lor \neg e_{n_k,n}) \tag{4}$$
where $e_{n_1,n}, \ldots, e_{n_k,n}$ represent all additional edges that would turn $c \cup \{n\}$ into a clique. For each pair of clique candidates $c$ and $c'$ such that $c \subset c'$, $\neg x_c \lor \neg x_{c'}$ is a logical consequence of the constraints (4). Such consequences are useful for strengthening the inferences made by SAT solvers.
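A minimal sketch of how constraints (2)-(4) could be emitted as DIMACS-style CNF clauses. The variable maps `x` and `e` and the function name are our own bookkeeping, not the authors' implementation; the equivalence in (3) is split into its two directions.

```python
from itertools import combinations

def coverage_and_maximality_clauses(nodes, cand, x, e):
    """Clauses for (2)-(4).  x[c] / e[(n, m)] map clique candidates and
    node pairs (n < m) to positive integers; a negative literal is the
    negation of the corresponding variable (DIMACS convention)."""
    clauses = []
    # (2): every node is covered by at least one chosen clique.
    for n in nodes:
        clauses.append([x[c] for c in cand if n in c])
    # (3): e_{n,m} <-> OR of x_c over cliques c containing both n and m.
    for n, m in combinations(sorted(nodes), 2):
        cs = [c for c in cand if n in c and m in c]
        clauses.append([-e[(n, m)]] + [x[c] for c in cs])   # -> direction
        clauses += [[-x[c], e[(n, m)]] for c in cs]         # <- direction
    # (4): x_c -> some edge completing c + {n} into a clique is absent.
    for c in cand:
        for n in set(nodes) - c:
            pair = lambda a, b: (min(a, b), max(a, b))
            clauses.append([-x[c]] + [-e[pair(u, n)] for u in c])
    return clauses

# Tiny demonstration on 3 nodes with all non-empty subsets as candidates.
nodes = [1, 2, 3]
cand = [frozenset(s) for k in (1, 2, 3) for s in combinations(nodes, k)]
x = {c: i + 1 for i, c in enumerate(cand)}
e = {(n, m): len(x) + j + 1
     for j, (n, m) in enumerate(combinations(nodes, 2))}
print(len(coverage_and_maximality_clauses(nodes, cand, x, e)))
```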
For condition 2(b) we use propositional variables $z_c$ which mean that either $c$ or one of its super-cliques is chosen, and propositional variables $w_c$ which mean that all edges of $c$ are chosen. For 2-element cliques $c = \{n_1, n_2\}$ we have
$$w_c \leftrightarrow e_{n_1,n_2}. \tag{5}$$
For larger cliques $c$ we have
$$w_c \leftrightarrow w_{c_1} \land \cdots \land w_{c_k} \tag{6}$$
where $c_1, \ldots, c_k$ are all subcliques of $c$ with one less node than $c$. Hence $w_c$ is true iff all edges of $c$ are chosen. If all edges of a clique are chosen, then the clique itself or one of its super-cliques must be chosen. If $c_1, \ldots, c_k$ are all cliques that extend $c$ by one node, this is encoded as follows.
$$w_c \to z_c \tag{7}$$
$$z_c \to (x_c \lor z_{c_1} \lor \cdots \lor z_{c_k}) \tag{8}$$

4.2 Chordality
We use a straightforward encoding of the chordality condition (3) of Definition 7. The idea is to generate constraints corresponding to every $k \geq 4$ element subset $S = \{n_1, \ldots, n_k\}$ of $V$. Let us consider all cycles these nodes could form in the graph $\langle V, E \rangle$ of condition 3 in Definition 7. A cycle starts from a given node, goes through all other nodes, with (undirected) edges between two consecutive nodes, and ends in the starting node. The number of constraints can be reduced by two observations. First, the same cycle could be generated from different starting nodes, e.g., cycles $n_1, n_2, n_3, n_4, n_1$ and $n_2, n_3, n_4, n_1, n_2$ are the same. Second, generating the same cycle in two opposite directions, as in $n_1, n_2, n_3, n_4, n_1$ and $n_1, n_4, n_3, n_2, n_1$, is unnecessary. To avoid redundant cycle constraints, we arbitrarily fix the starting node and require that the index of the second node in the cycle is lower than the index of the second last node. These restrictions guarantee that every cycle associated with $S$ is considered exactly once. Now, the chordality constraint says that if there is an edge between every pair of consecutive nodes in $n_1, \ldots, n_k, n_1$, then there also has to be an edge between at least one pair of two non-consecutive nodes. In the case $k = 4$, for instance, this leads to formulas of the form
$$e_{n_1,n_2} \land e_{n_2,n_3} \land e_{n_3,n_4} \land e_{n_4,n_1} \to e_{n_1,n_3} \lor e_{n_2,n_4}. \tag{9}$$
This encoding of chordality constraints is exponential in $|V|$ and therefore not scalable to large numbers of nodes. However, the datasets considered in Section 5 have only 6 or 8 variables, and in these cases the exponentiality is not an issue.

Footnote 1: As the edges are undirected, we limit to $e_{n,m}$ such that the ordering of $n$ and $m$ according to some fixed ordering is increasing, i.e., $n < m$. Under this assumption, $e_{m,n}$ for $n < m$ denotes $e_{n,m}$.
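For the $k = 4$ case, the clauses of form (9) can be generated as in the sketch below, assuming the same hypothetical edge-variable map `e` as in the previous sketch and the orientation restrictions just described.

```python
from itertools import combinations, permutations

def chordality_clauses_k4(nodes, e):
    """Clauses of form (9) for all 4-element subsets of nodes.
    e[(n, m)] with n < m maps undirected edges to positive variables."""
    def var(a, b):
        return e[(min(a, b), max(a, b))]
    clauses = []
    for quad in combinations(sorted(nodes), 4):
        n1, rest = quad[0], quad[1:]
        for n2, n3, n4 in permutations(rest):
            if n2 > n4:      # fix orientation: second node < second last
                continue
            # cycle n1-n2-n3-n4-n1 present => chord n1-n3 or n2-n4 present
            cyc = [var(n1, n2), var(n2, n3), var(n3, n4), var(n4, n1)]
            clauses.append([-v for v in cyc] + [var(n1, n3), var(n2, n4)])
    return clauses
```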
4.3 Separators
Separators for pairs $c$ and $c'$ of clique candidates can be formalized as propositional variables $s_{c,c'}$, meaning that $c \cap c'$ is a separator and there is an edge in the spanning tree between $c$ and $c'$ labeled by $c \cap c'$. The corresponding constraint is
$$s_{c,c'} \to x_c \land x_{c'}. \tag{10}$$
The lack of the converse implication formalizes the choice of the spanning tree, i.e., $s_{c,c'}$ can be false even if $x_c$ and $x_{c'}$ are true. The remaining constraints on separators fall into two cases.
First, we have cardinality constraints encoding the balancing condition (cf. Section 3.1): each variable occurs in the chosen cliques one more time than it occurs in the separators which label the spanning tree. Cardinality constraints are natively supported by some constraint solvers, or they can be reduced to Boolean constraints [23]. Second, the graph formed by the cliques with the separators as edges must be acyclic. We encode this through an inductive definition of trees: repeatedly remove leaf nodes, i.e., nodes with at most one neighbor, until all nodes have been removed. When applying this definition to a cyclic graph, some nodes will remain in the end. We define the leaf level for each node. A node is a level 0 leaf iff it has 0 or 1 neighbors in the graph. A node is a level $l + 1$ leaf iff all its neighbors except possibly one are level $j \leq l$ leaves. This definition is directly expressible as Boolean constraints. A graph with $m$ nodes is acyclic iff all its nodes are level $\lfloor m/2 \rfloor$ leaves.
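The leaf-level definition translates directly into a procedural test; the following sketch (our own naming) peels leaves for levels $0, \ldots, \lfloor m/2 \rfloor$ and reports acyclicity.

```python
def is_acyclic(adj):
    """Acyclicity via the leaf-level definition: repeatedly peel nodes
    with <= 1 remaining neighbor; the graph is acyclic iff everything
    is peeled by level floor(m/2).  adj: dict node -> set of neighbors."""
    remaining = {n: set(nbrs) for n, nbrs in adj.items()}
    m = len(adj)
    for level in range(m // 2 + 1):
        leaves = [n for n in remaining if len(remaining[n]) <= 1]
        for n in leaves:
            for nb in remaining[n]:
                remaining[nb].discard(n)
            del remaining[n]
        if not remaining:
            return True
    return not remaining

tree = {1: {2}, 2: {1, 3}, 3: {2}}
cycle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(is_acyclic(tree), is_acyclic(cycle))  # True False
```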
5 Experimental Evaluation
The constraints described in Section 4 can be alternatively expressed as MAXSAT, SMT, or ASP problems. We have used the respective solvers to compute optimal Markov networks for datasets from the literature. The test runs were performed on an Intel Xeon E3-1230 CPU running at 3.20 GHz.
1. For the MAXSAT encodings, we tried out SAT4J (version 2.3.2) [24] and PWBO (version 2.2) [25]. The latter was run in its default configuration as well as in the UB configuration.
2. For SMT, we used the OptiMathSAT solver (version 5) [26].
3. For ASP, we used the Clasp (version 2.1.3) [27] and hClasp (also v. 2.1.3) [28] solvers. The latter allows declaratively specifying search heuristics. We also tried the LP2NORMAL tool that reduces cardinality constraints to more basic constraints [29].
We consider two datasets, one containing risk factors in heart diseases and the other variables related
to economical behavior [18], to be abbreviated by heart and econ in the sequel. For heart, the globally optimal network has been verified via (expensive) exhaustive enumeration. For econ, however,
exhaustive enumeration is impractical due to the extremely large search space, and consequently the
optimality of the Markov network found by stochastic search in [4] had been open until now. For
both datasets, we computed the respective score file that specifies the score of each clique candidate,
i.e., the log-value of its potential function, and the list of variables involved in that clique. The score
files were then translated to be run with the different solvers. The MAXSAT and ASP solvers only
support integer scores obtained by multiplying the original scores by 1000 and rounding. The SMT
solver OptiMathSAT used the original floating point scores. The results are given in Table 1.
The heart data involves 6 variables giving rise to $2^6 = 64$ clique candidates in total and a search space of $2^{15}$ undirected networks of which a subset are decomposable. For instance, the ASP solver hClasp traversed a considerably smaller search space that consisted of 26,651 (partial) networks. This illustrates the power of branch-and-bound type algorithms behind the solvers and their ability to prune the search space. On the other hand, the econ dataset is based on 8 variables giving rise to a much larger search space of $2^{28}$ networks. We were able to solve this instance optimally with one solver only, hClasp, which allows for a more refined control of the search heuristic: we forced hClasp to try cliques in order of size, with greatest cliques first. This allowed us to find the global optimum in about 14 hours, after which 3 days were spent on the proof of optimality.

Solver             Runtime heart (s)   Runtime econ (s)   Input size heart   Input size econ
OptiMathSAT        74                  -                  3930 kB            139 MB
PWBO (default)     158                 -                  3120 kB            130 MB
PWBO (UB)          63                  -                  3120 kB            130 MB
SAT4J              28                  -                  3120 kB            130 MB
LP2NORMAL+Clasp    111                 -                  8120 kB            1060 MB
Clasp              5.6                 -                  197 kB             4.2 MB
hClasp             1.6                 310 × 10^3         203 kB             4.2 MB

Table 1: Summary of results: runtimes in seconds and sizes of solver input files. (The econ runtime is reported only for hClasp, the one solver that solved the econ instance to optimality.)
6 Conclusions
Boolean constraint methods appear not to have previously been applied to the learning of undirected Markov networks. We have introduced a generic approach in which the learning problem is expressed in
terms of constraints on variables that determine the structure of the learned network. The related
problem of structure learning of Bayesian networks has been addressed by general-purpose combinatorial search methods, including MAXSAT [30] and a constraint-programming solver with a
linear-programming solver as a subprocedure [31, 32]. We introduced explicit translations of the
generic constraints to the languages of MAXSAT, SMT and ASP, and demonstrated their use through
existing solver technology. Our method thus opens up a novel avenue of research to further develop and optimize the use of such technology for network learning. A wide variety of possibilities also exists for using these methods in combination with stochastic or heuristic search.
References
[1] J. N. Darroch, Steffen L. Lauritzen, and T. P. Speed. Markov fields and log-linear interaction models for contingency tables. The Annals of Statistics, 8:522–539, 1980.
[2] Steffen L. Lauritzen and Nanny Wermuth. Graphical models for associations between variables, some of which are qualitative and some quantitative. The Annals of Statistics, 17:31–57, 1989.
[3] Jukka Corander. Bayesian graphical model determination using decision theory. Journal of Multivariate Analysis, 85:253–266, 2003.
[4] Jukka Corander, Magnus Ekdahl, and Timo Koski. Parallel interacting MCMC for learning of topologies of graphical models. Data Mining and Knowledge Discovery, 17:431–456, 2008.
[5] Petros Dellaportas and Jonathan J. Forster. Markov chain Monte Carlo model determination for hierarchical and graphical log-linear models. Biometrika, 86:615–633, 1999.
[6] Paolo Giudici and Robert Castello. Improving Markov chain Monte Carlo model search for data mining. Machine Learning, 50:127–158, 2003.
[7] Paolo Giudici and Peter J. Green. Decomposable graphical Gaussian model determination. Biometrika, 86:785–801, 1999.
[8] Mikko Koivisto and Kismat Sood. Exact Bayesian structure discovery in Bayesian networks. Journal of Machine Learning Research, 5:549–573, 2004.
[9] David Madigan and Adrian E. Raftery. Model selection and accounting for model uncertainty in graphical models using Occam's window. Journal of the American Statistical Association, 89:1535–1546, 1994.
[10] Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. Inducing features of random fields. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997.
[11] Su-In Lee, Varun Ganapathi, and Daphne Koller. Efficient structure learning of Markov networks using L1-regularization. In Advances in Neural Information Processing Systems 19, pages 817–824. MIT Press, 2006.
[12] M. Schmidt, A. Niculescu-Mizil, and K. Murphy. Learning graphical model structure using L1-regularization paths. In Proceedings of the National Conference on Artificial Intelligence, page 1278. AAAI Press / MIT Press, 2007.
[13] Holger Höfling and Robert Tibshirani. Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. Journal of Machine Learning Research, 10:883–906, 2009.
[14] Armin Biere, Marijn J. H. Heule, Hans van Maaren, and Toby Walsh, editors. Handbook of Satisfiability. IOS Press, 2009.
[15] Chu Min Li and Felip Manyà. MaxSAT, Hard and Soft Constraints, chapter 19, pages 613–631. In Biere et al. [14], 2009.
[16] Clark Barrett, Roberto Sebastiani, Sanjit A. Seshia, and Cesare Tinelli. Satisfiability Modulo Theories, chapter 26, pages 825–885. In Biere et al. [14], 2009.
[17] Gerhard Brewka, Thomas Eiter, and Miroslaw Truszczynski. Answer set programming at a glance. Commun. ACM, 54(12):92–103, 2011.
[18] Joe Whittaker. Graphical Models in Applied Multivariate Statistics. Wiley Publishing, 1990.
[19] Martin C. Golumbic. Algorithmic Graph Theory and Perfect Graphs. Academic Press, 1980.
[20] Ronald L. Graham and Pavol Hell. On the history of the minimum spanning tree problem. Annals of the History of Computing, 7(1):43–57, 1985.
[21] Yukio Shibata. On the tree representation of chordal graphs. Journal of Graph Theory, 12(3):421–428, 1988.
[22] Finn V. Jensen and Frank Jensen. Optimal junction trees. In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI-94), pages 360–366, 1994.
[23] Carsten Sinz. Towards an optimal CNF encoding of Boolean cardinality constraints. In Principles and Practice of Constraint Programming (CP 2005), number 3709 in Lecture Notes in Computer Science, pages 827–831. Springer-Verlag, 2005.
[24] Daniel Le Berre and Anne Parrain. The Sat4j library, release 2.2 system description. Journal on Satisfiability, Boolean Modeling and Computation, 7:59–64, 2010.
[25] Ruben Martins, Vasco Manquinho, and Inês Lynce. Parallel search for maximum satisfiability. AI Communications, 25:75–95, 2012.
[26] Roberto Sebastiani and Silvia Tomasi. Optimization in SMT with LA(Q) cost functions. In Automated Reasoning, volume 7364 of LNCS, pages 484–498. Springer-Verlag, 2012.
[27] Martin Gebser, Benjamin Kaufmann, and Torsten Schaub. Conflict-driven answer set solving: From theory to practice. Artif. Intell., 187:52–89, 2012.
[28] Martin Gebser, Benjamin Kaufmann, Ramón Otero, Javier Romero, Torsten Schaub, and Philipp Wanko. Domain-specific heuristics in answer set programming. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence. AAAI Press, 2013.
[29] Tomi Janhunen and Ilkka Niemelä. Compact translations of non-disjunctive answer set programs to propositional clauses. In Gelfond Festschrift, Vol. 6565 of LNCS, pages 111–130. Springer-Verlag, 2011.
[30] James Cussens. Bayesian network learning by compiling to weighted MAX-SAT. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 105–112, 2008.
[31] James Cussens. Bayesian network learning with cutting planes. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 153–160. AUAI Press, 2011.
[32] Mark Bartlett and James Cussens. Advances in Bayesian network learning using integer programming. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI 2013), pages 182–191. AUAI Press, 2013.
Bayesian Estimation of Latently-grouped Parameters
in Undirected Graphical Models
David Page
Dept of BMI, University of Wisconsin
Madison, WI 53706
[email protected]
Jie Liu
Dept of CS, University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
In large-scale applications of undirected graphical models, such as social networks
and biological networks, similar patterns occur frequently and give rise to similar parameters. In this situation, it is beneficial to group the parameters for more
efficient learning. We show that even when the grouping is unknown, we can infer these parameter groups during learning via a Bayesian approach. We impose a
Dirichlet process prior on the parameters. Posterior inference usually involves calculating intractable terms, and we propose two approximation algorithms, namely
a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with "stripped" Beta approximation (Gibbs SBA). Simulations show that both algorithms outperform conventional maximum likelihood estimation (MLE). Gibbs SBA's performance is close to Gibbs sampling with exact likelihood calculation. Models learned with Gibbs SBA also generalize better than the models
learned by MLE on real-world Senate voting data.
1 Introduction
Undirected graphical models, a.k.a. Markov random fields (MRFs), have many real-world applications such as social networks and biological networks. In these large-scale networks, similar kinds
of relations can occur frequently and give rise to repeated occurrences of similar parameters, but the
grouping pattern among the parameters is usually unknown. For a social network example, suppose
that we collect voting data over the last 20 years from a group of 1,000 people who are related to each
other through different types of relations (such as family, co-workers, classmates, friends and so on),
but the relation types are usually unknown. If we use a binary pairwise MRF to model the data, each
binary node denotes one person?s vote, and two nodes are connected if the two people are linked
in the social network. Eventually we want to estimate the pairwise potential functions on edges,
which can provide insights about how the relations between people affect their decisions. This can
be done via standard maximum likelihood estimation (MLE), but the latent grouping pattern among
the parameters is totally ignored, and the model can be overparametrized. Therefore, two questions
naturally arise. Can MRF parameter learners automatically identify these latent parameter groups
during learning? Will this further abstraction make the model generalize better, analogous to the
lessons we have learned from hierarchical modeling [9] and topic modeling [5]?
This paper shows that it is feasible and potentially beneficial to identify the latent parameter groups
during MRF parameter learning. Specifically, we impose a Dirichlet process prior on the parameters
to accommodate our uncertainty about the number of the parameter groups. Posterior inference can
be done by Markov chain Monte Carlo with proper approximations. We propose two approximation
algorithms, a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with stripped Beta approximation (Gibbs SBA). Algorithmic details are provided in Section
3 after we review related parameter estimation methods in Section 2. In Section 4, we evaluate
our Bayesian estimates and the classical MLE on different models, and both algorithms outperform
classical MLE. The Gibbs SBA algorithm performs very close to the Gibbs sampling algorithm with
exact likelihood calculation. Models learned with Gibbs SBA also generalize better than the models
learned by MLE on real-world Senate voting data in Section 5. We finally conclude in Section 6.
2 Maximum Likelihood Estimation and Bayesian Estimation for MRFs
Let $\mathcal{X} = \{0, 1, \ldots, m-1\}$ be a discrete space. Suppose that we have an MRF defined on a random vector $X \in \mathcal{X}^d$ described by an undirected graph $G(V, E)$ with $d$ nodes in the node set $V$ and $r$ edges in the edge set $E$. The probability of one sample $x$ from the MRF parameterized by $\theta$ is
$$P(x; \theta) = \tilde{P}(x; \theta)/Z(\theta), \tag{1}$$
where $Z(\theta)$ is the partition function. $\tilde{P}(x; \theta) = \prod_{c \in C(G)} \phi_c(x; \theta_c)$ is some unnormalized measure, $C(G)$ is some subset of cliques in $G$, and $\phi_c$ is the potential function defined on the clique $c$ parameterized by $\theta_c$. In this paper, we consider binary pairwise MRFs for simplicity, i.e. $C(G) = E$ and $m = 2$. We also assume that each potential function $\phi_c$ is parameterized by one parameter $\theta_c$, namely $\phi_c(X; \theta_c) = \theta_c^{I(X_u = X_v)}(1-\theta_c)^{I(X_u \neq X_v)}$ where $I(X_u = X_v)$ indicates whether the two nodes $u$ and $v$ connected by edge $c$ take the same value, and $0 < \theta_c < 1$, $\forall c = 1, \ldots, r$. Thus, $\theta = \{\theta_1, \ldots, \theta_r\}$. Suppose that we have $n$ independent samples $X = \{x^1, \ldots, x^n\}$ from (1), and we want to estimate $\theta$.
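For concreteness, the unnormalized measure $\tilde{P}(x; \theta)$ of this binary pairwise MRF can be evaluated in log space as follows (a sketch with our own names; not code from the paper).

```python
import numpy as np

def log_unnorm(x, edges, theta):
    """log P~(x; theta) for the binary pairwise MRF above: each edge
    c = (u, v) contributes theta_c if x_u == x_v, else (1 - theta_c)."""
    lp = 0.0
    for c, (u, v) in enumerate(edges):
        lp += np.log(theta[c]) if x[u] == x[v] else np.log(1.0 - theta[c])
    return lp

edges = [(0, 1), (1, 2)]          # a 3-node chain
theta = np.array([0.9, 0.2])
print(log_unnorm([1, 1, 0], edges, theta))  # log(0.9) + log(0.8)
```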
Maximum Likelihood Estimate: The MLE of $\theta$ maximizes the log-likelihood function $L(\theta|X)$ which is concave w.r.t. $\theta$. Therefore, we can use gradient ascent to find the global maximum of the likelihood function and find the MLE of $\theta$. The partial derivative of $L(\theta|X)$ with respect to $\theta_i$ is
$$\frac{\partial L(\theta|X)}{\partial \theta_i} = \frac{1}{n}\sum_{j=1}^{n} \phi_i(x^j) - E_{\theta}\phi_i = E_X\phi_i - E_{\theta}\phi_i$$
where $\phi_i$ is the sufficient statistic corresponding to $\theta_i$ after we rewrite the density into the exponential family form, and $E_{\theta}\phi_i$ is the expectation of $\phi_i$ with respect to the distribution specified by $\theta$. However, the exact computation of $E_{\theta}\phi_i$ takes time exponential in the treewidth of $G$. A few sampling-based methods have been proposed, with different ways of generating particles and computing $E_{\theta}\phi$ from the particles, including MCMC-MLE [11, 34], particle-filtered MCMC-MLE [1], contrastive divergence [15] and its variations such as persistent contrastive divergence (PCD) [29] and fast PCD [30]. Note that contrastive divergence is related to pseudo-likelihood [4], ratio matching [17, 16], and together with other MRF parameter estimators [13, 31, 12] can be unified as minimum KL contraction [18].
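The sketch below illustrates persistent contrastive divergence for this model; the defaults (learning rate, chain count, one Gibbs sweep per update) are our own choices, and the gradient step is taken in the log-odds parameterization, where it equals $E_X\phi_i - E_{\theta}\phi_i$.

```python
import numpy as np

def pcd(samples, edges, n_nodes, steps=500, lr=0.05, n_chains=20, seed=0):
    """Persistent contrastive divergence for the edge parameters theta
    (a sketch, not the paper's implementation).  The sufficient
    statistic of edge (u, v) is I(x_u == x_v)."""
    rng = np.random.default_rng(seed)
    theta = np.full(len(edges), 0.5)
    def stats(X):
        return np.array([(X[:, u] == X[:, v]).mean() for u, v in edges])
    data_stat = stats(np.asarray(samples))
    chains = rng.integers(0, 2, size=(n_chains, n_nodes))  # persistent
    nbrs = [[] for _ in range(n_nodes)]
    for c, (u, v) in enumerate(edges):
        nbrs[u].append((c, v))
        nbrs[v].append((c, u))
    for _ in range(steps):
        for i in range(n_nodes):        # one Gibbs sweep of all chains
            w1 = np.ones(n_chains)      # unnormalized weight of x_i = 1
            w0 = np.ones(n_chains)      # unnormalized weight of x_i = 0
            for c, j in nbrs[i]:
                agree1 = chains[:, j] == 1
                w1 *= np.where(agree1, theta[c], 1 - theta[c])
                w0 *= np.where(agree1, 1 - theta[c], theta[c])
            chains[:, i] = rng.random(n_chains) < w1 / (w1 + w0)
        grad = data_stat - stats(chains)
        eta = np.log(theta / (1 - theta)) + lr * grad  # natural-param step
        theta = 1.0 / (1.0 + np.exp(-eta))
    return theta
```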
Bayesian Estimate: Let $\pi(\theta)$ be a prior of $\theta$; then its posterior is $P(\theta|X) \propto \pi(\theta)\tilde{P}(X; \theta)/Z(\theta)$. The Bayesian estimate of $\theta$ is its posterior mean. Exact sampling from $P(\theta|X)$ is known as doubly-intractable for general MRFs [21]. If we use the Metropolis-Hastings algorithm, then the Metropolis-Hastings ratio is
$$a(\theta'|\theta) = \frac{\pi(\theta')\,\tilde{P}(X; \theta')\,Q(\theta|\theta')/Z(\theta')}{\pi(\theta)\,\tilde{P}(X; \theta)\,Q(\theta'|\theta)/Z(\theta)}, \tag{2}$$
where $Q(\theta'|\theta)$ is some proposal distribution from $\theta$ to $\theta'$, and with probability $\min\{1, a(\theta'|\theta)\}$ we accept the move from $\theta$ to $\theta'$. The real hurdle is that we have to evaluate the intractable $Z(\theta)/Z(\theta')$ in the ratio. In [20], Møller et al. introduce one auxiliary variable $y$ on the same space as $x$, and the state variable is extended to $(\theta, y)$. They set the new proposal distribution for the extended state $Q(\theta, y|\theta', y') = Q(\theta|\theta')\tilde{P}(y; \theta)/Z(\theta)$ to cancel $Z(\theta)/Z(\theta')$ in (2). Therefore, by ignoring $y$, we can generate the posterior samples of $\theta$ via Metropolis-Hastings. Technically, this auxiliary variable approach requires perfect sampling [25], but [20] pointed out that other simpler Markov chain methods also work with the proviso that the chain converges adequately to the equilibrium distribution.
3 Bayesian Parameter Estimation for MRFs with Dirichlet Process Prior

In order to model the latent parameter groups, we impose a Dirichlet process prior on $\theta$, which accommodates our uncertainty about the number of groups. Then, the generating model is
$$G \sim \mathrm{DP}(\alpha_0, G_0), \qquad \theta_i \mid G \sim G,\ i = 1, \ldots, r, \qquad x^j \mid \theta \sim F(\theta),\ j = 1, \ldots, n, \tag{3}$$
where $F(\theta)$ is the distribution specified by (1). $G_0$ is the base distribution (e.g. Unif(0, 1)), and $\alpha_0$ is the concentration parameter. With probability 1.0, the distribution $G$ drawn from $\mathrm{DP}(\alpha_0, G_0)$ is discrete, and places its mass on a countably infinite collection of atoms drawn from $G_0$. In this model, $X = \{x^1, \ldots, x^n\}$ is observed, and we want to perform posterior inference for $\theta = (\theta_1, \theta_2, \ldots, \theta_r)$, and regard its posterior mean as its Bayesian estimate. We propose two Markov chain Monte Carlo (MCMC) methods. One is a Metropolis-Hastings algorithm with auxiliary variables, as introduced in Section 3.1. The second is a Gibbs sampling algorithm with stripped Beta approximation, as introduced in Section 3.2. In both methods, the state of the Markov chain is specified by two vectors, $c$ and $\phi$. In vector $c = (c_1, \ldots, c_r)$, $c_i$ denotes the group to which $\theta_i$ belongs. $\phi = (\phi_1, \ldots, \phi_k)$ records the $k$ distinct values in $\{\theta_1, \ldots, \theta_r\}$ with $\phi_{c_i} = \theta_i$ for $i = 1, \ldots, r$. This way of specifying the Markov chain is more efficient than setting the state variable directly to be $(\theta_1, \theta_2, \ldots, \theta_r)$ [22].
3.1 Metropolis-Hastings (MH) with Auxiliary Variables

In the MH algorithm (see Algorithm 1), the initial state of the Markov chain is set by performing K-means clustering on the MLE of $\theta$ (e.g. from the PCD algorithm [29]) with $K = \lfloor \alpha_0 \ln r \rfloor$. The Markov chain resembles Algorithm 5 in [22], and it is ergodic. We move the Markov chain forward for $T$ steps. In each step, we update $c$ first and then update $\phi$. We update each element of $c$ in turn; when resampling $c_i$, we fix $c_{-i}$, all elements in $c$ other than $c_i$. When updating $c_i$, we repeatedly for $M$ times propose a new value $c_i^*$ according to proposal $Q(c_i^*|c_i)$ and accept the move with probability $\min\{1, a(c_i^*|c_i)\}$ where $a(c_i^*|c_i)$ is the MH ratio. After we update every element of $c$ in the current iteration, we draw a posterior sample of $\phi$ according to the current grouping $c$. We iterate $T$ times, and get $T$ posterior samples of $\theta$. Unlike the tractable Algorithm 5 in [22], we need to introduce auxiliary variables to bypass the MRF's intractable likelihood in two places, namely calculating the MH ratio (Section 3.1.1) and drawing samples of $\phi|c$ (Section 3.1.2).
3.1.1 Calculating Metropolis-Hastings Ratio

Algorithm 1 The Metropolis-Hastings algorithm
Input: observed data $X = \{x^1, \ldots, x^n\}$
Output: $\hat{\theta}^{(1)}, \ldots, \hat{\theta}^{(T)}$; $T$ samples of $\theta \mid X$
Procedure:
  Perform the PCD algorithm to get $\hat{\theta}$, the MLE of $\theta$
  Init. $c$ and $\phi$ via K-means on $\hat{\theta}$; $K = \lfloor \alpha_0 \ln r \rfloor$
  for $t = 1$ to $T$ do
    for $i = 1$ to $r$ do
      for $l = 1$ to $M$ do
        Draw a candidate $c_i^*$ from $Q(c_i^* \mid c_i)$
        If $c_i^* \notin c$, draw a value for $\phi_{c_i^*}$ from $G_0$
        Set $c_i = c_i^*$ with prob $\min\{1, a(c_i^* \mid c_i)\}$
      end for
    end for
    Draw a posterior sample of $\phi$ according to the current $c$, and set $\hat{\theta}_i^{(t)} = \phi_{c_i}$ for $i = 1, \ldots, r$.
  end for

The MH ratio of proposing a new value $c_i^*$ for $c_i$ according to proposal $Q(c_i^* \mid c_i)$ is
$$a(c_i^* \mid c_i) = \frac{\pi(c_i^*, c_{-i})\,P(X; \theta_{.i^*})\,Q(c_i \mid c_i^*)}{\pi(c_i, c_{-i})\,P(X; \theta)\,Q(c_i^* \mid c_i)} = \frac{\pi(c_i^* \mid c_{-i})\,\tilde{P}(X; \theta_{.i^*})\,Q(c_i \mid c_i^*)/Z(\theta_{.i^*})}{\pi(c_i \mid c_{-i})\,\tilde{P}(X; \theta)\,Q(c_i^* \mid c_i)/Z(\theta)},$$
where $\theta_{.i^*}$ is the same as $\theta$ except that its $i$-th element is replaced with $\phi_{c_i^*}$. The conditional prior $\pi(c_i^* \mid c_{-i})$ is
$$\pi(c_i = c \mid c_{-i}) = \begin{cases} \dfrac{n_{-i,c}}{r-1+\alpha_0}, & \text{if } c \in c_{-i} \\[4pt] \dfrac{\alpha_0}{r-1+\alpha_0}, & \text{if } c \notin c_{-i} \end{cases}$$
where $n_{-i,c}$ is the number of $c_j$ with $j \neq i$ and $c_j = c$. We choose the proposal $Q(c_i^* \mid c_i)$ to be the conditional prior $\pi(c_i^* \mid c_{-i})$, and the Metropolis-Hastings ratio can be further simplified as $a(c_i^* \mid c_i) = \tilde{P}(X; \theta_{.i^*})Z(\theta)/\tilde{P}(X; \theta)Z(\theta_{.i^*})$. However, $Z(\theta)/Z(\theta_{.i^*})$ is intractable. Similar to [20], we introduce an auxiliary variable $Z$ on the same space as $X$, and the state variable is extended to $(c, Z)$. When proposing a move, we propose $c_i^*$ first and then propose $Z^*$ with proposal $P(Z^*; \theta_{.i^*})$ to cancel the intractable $Z(\theta)/Z(\theta_{.i^*})$. We set the target distribution of $Z$ to be $P(Z; \hat{\theta})$ where $\hat{\theta}$ is some estimate of $\theta$ (e.g. from PCD [29]). Then, the MH ratio with the auxiliary variable is
$$a(c_i^*, Z^* \mid c_i, Z) = \frac{P(Z^*; \hat{\theta})\,\tilde{P}(X; \theta_{.i^*})\,\tilde{P}(Z; \theta)}{P(Z; \hat{\theta})\,\tilde{P}(X; \theta)\,\tilde{P}(Z^*; \theta_{.i^*})} = \frac{\tilde{P}(Z^*; \hat{\theta})\,\tilde{P}(X; \theta_{.i^*})\,\tilde{P}(Z; \theta)}{\tilde{P}(Z; \hat{\theta})\,\tilde{P}(X; \theta)\,\tilde{P}(Z^*; \theta_{.i^*})}.$$
Thus, the intractable computation of the MH ratio is replaced by generating particles $Z^*$ and $Z$ under $\theta_{.i^*}$ and $\theta$ respectively. Ideally, we should use perfect sampling [25], but it is intractable for general MRFs. As a compromise, we use standard Gibbs sampling with long runs to generate these particles.
3.1.2 Drawing Posterior Samples of $\phi|c$

We draw posterior samples of $\phi$ under grouping $c$ via the MH algorithm, again following [20]. The state of the Markov chain is $\phi$. The initial state of the Markov chain is set by running PCD [29] with parameters tied according to $c$. The proposal $Q(\phi^*|\phi)$ is a $k$-variate Gaussian $N(\phi, \sigma_Q^2 I_k)$ where $\sigma_Q^2 I_k$ is the covariance matrix. The auxiliary variable $Y$ is on the same space as $X$, and the state is extended to $(\phi, Y)$. The proposal distribution for the extended state variable is $Q(\phi, Y|\phi^*, Y^*) = Q(\phi|\phi^*)\tilde{P}(Y; \phi)/Z(\phi)$. We set the target distribution of $Y$ to be $P(Y; \hat{\theta})$ where $\hat{\theta}$ is some estimate of $\theta$ such as the estimate from the PCD algorithm [29]. Then, the MH ratio for the extended state is
$$a(\phi^*, Y^* \mid \phi, Y) = I(\phi^* \in \Theta)\,\frac{\tilde{P}(Y^*; \hat{\theta})\,\tilde{P}(X; \phi^*)\,\tilde{P}(Y; \phi)}{\tilde{P}(Y; \hat{\theta})\,\tilde{P}(X; \phi)\,\tilde{P}(Y^*; \phi^*)},$$
where $I(\phi^* \in \Theta)$ indicates that every dimension of $\phi^*$ is in the domain of $G_0$. We set the state to be the new values with probability $\min\{1, a(\phi^*, Y^* \mid \phi, Y)\}$. We move the Markov chain for $S$ steps, and get $S$ samples of $\phi$ by ignoring $Y$. Eventually we draw one sample from them randomly.
3.2 Gibbs Sampling with Stripped Beta Approximation

Algorithm 2 The Gibbs sampling algorithm
Input: observed data $X = \{x^1, x^2, \ldots, x^n\}$
Output: $\hat{\theta}^{(1)}, \ldots, \hat{\theta}^{(T)}$; $T$ posterior samples of $\theta \mid X$
Procedure:
  Perform the PCD algorithm to get the MLE $\hat{\theta}$
  Init. $c$ and $\phi$ via K-means on $\hat{\theta}$; $K = \lfloor \alpha_0 \ln r \rfloor$
  for $t = 1$ to $T$ do
    for $i = 1$ to $r$ do
      If the current $c_i$ is unique in $c$, remove $\phi_{c_i}$ from $\phi$
      Update $c_i$ according to (4).
      If the new $c_i \notin c$, draw a value for $\phi_{c_i}$ and add it to $\phi$
    end for
    Draw a posterior sample of $\phi$ according to the current $c$, and set $\hat{\theta}_i^{(t)} = \phi_{c_i}$ for $i = 1, \ldots, r$
  end for

In the Gibbs sampling algorithm (see Algorithm 2), the initialization of the Markov chain is exactly the same as in the MH algorithm in Section 3.1. The Markov chain resembles Algorithm 2 in [22] and it can be shown to be ergodic. We move the Markov chain forward for $T$ steps. In each of the $T$ steps, we update $c$ first and then update $\phi$. When we update $c$, we fix the values in $\phi$, except we may add one new value to $\phi$ or remove a value from $\phi$. We update each element of $c$ in turn. When we update $c_i$, we first examine whether $c_i$ is unique in $c$. If so, we remove $\phi_{c_i}$ from $\phi$ first. We then update $c_i$ by assigning it to an existing group or a new group with a probability proportional to a product of two quantities, namely
$$P(c_i = c \mid c_{-i}, X, \phi_{c_{-i}}) \propto \begin{cases} \dfrac{n_{-i,c}}{r-1+\alpha_0}\, P(X; \phi_c, \phi_{c_{-i}}), & \text{if } c \in c_{-i} \\[4pt] \dfrac{\alpha_0}{r-1+\alpha_0} \displaystyle\int P(X; \theta_i, \phi_{c_{-i}})\, dG_0(\theta_i), & \text{if } c \notin c_{-i}. \end{cases} \tag{4}$$
The first quantity is $n_{-i,c}$, the number of members already in group $c$. For starting a new group, the quantity is $\alpha_0$. The second quantity is the likelihood of $X$ after assigning $c_i$ to the new value $c$ conditional on $\phi_{c_{-i}}$. When considering a new group, we integrate the likelihood w.r.t. $G_0$. After $c_i$ is resampled, it is either set to be an existing group or a new group. If a new group is assigned, we draw a new value for $\phi_{c_i}$, and add it to $\phi$. After updating every element of $c$ in the current iteration, we draw a posterior sample of $\phi$ under the current grouping $c$. In total, we run $T$ iterations, and get $T$ posterior samples of $\theta$. This Gibbs sampling algorithm involves two intractable calculations, namely (i) calculating $P(X; \phi_c, \phi_{c_{-i}})$ and $\int P(X; \theta_i, \phi_{c_{-i}})\, dG_0(\theta_i)$ in (4) and (ii) drawing posterior samples for $\phi$. We use a stripped Beta approximation in both places, as described in Sections 3.2.1 and 3.2.2.
3.2.1 Calculating $P(X; \phi_c, \phi_{c_{-i}})$ and $\int P(X; \theta_i, \phi_{c_{-i}})\, dG_0(\theta_i)$ in (4)

In Formula (4), we evaluate $P(X; \phi_c, \phi_{c_{-i}})$ for different $\phi_c$ values with $\phi_{c_{-i}}$ fixed and $X = \{x^1, x^2, \ldots, x^n\}$ observed. For ease of notation, we rewrite this quantity as a likelihood function of $\theta_i$, $L(\theta_i|X, \theta_{-i})$, where $\theta_{-i} = \{\theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_r\}$ is fixed. Suppose that the edge $i$ connects variables $X_u$ and $X_v$, and we denote $X_{-uv}$ to be the variables other than $X_u$ and $X_v$. Then
$$L(\theta_i|X, \theta_{-i}) = \prod_{j=1}^{n} P(x_u^j, x_v^j|x_{-uv}^j; \theta_i, \theta_{-i})\,P(x_{-uv}^j; \theta_i, \theta_{-i}) \approx \prod_{j=1}^{n} P(x_u^j, x_v^j|x_{-uv}^j; \theta_i, \theta_{-i})\,P(x_{-uv}^j; \theta_{-i}) \propto \prod_{j=1}^{n} P(x_u^j, x_v^j|x_{-uv}^j; \theta_i, \theta_{-i}).$$
Above we approximate $P(x_{-uv}^j; \theta_i, \theta_{-i})$ with $P(x_{-uv}^j; \theta_{-i})$ because the density of $X_{-uv}$ mostly depends on $\theta_{-i}$. The term $P(x_{-uv}^j; \theta_{-i})$ can be dropped since $\theta_{-i}$ is fixed, and we only have to consider $P(x_u^j, x_v^j|x_{-uv}^j; \theta_i, \theta_{-i})$. Since $\theta_{-i}$ is fixed and we are conditioning on $x_{-uv}^j$, they together can be regarded as a fixed potential function telling how likely the rest of the graph thinks $X_u$ and $X_v$ should take the same value. Suppose that this fixed potential function (the message from the rest of the network $x_{-uv}^j$) is parameterized as $\lambda_i$ ($0 < \lambda_i < 1$). Then
$$\prod_{j=1}^{n} P(x_u^j, x_v^j|x_{-uv}^j; \theta_i, \theta_{-i}) \propto \prod_{j=1}^{n} \eta^{I(x_u^j = x_v^j)}(1-\eta)^{I(x_u^j \neq x_v^j)} = \eta^{\sum_{j=1}^{n} I(x_u^j = x_v^j)}(1-\eta)^{\sum_{j=1}^{n} I(x_u^j \neq x_v^j)} \tag{5}$$
where $\eta = \theta_i\lambda_i/\{\theta_i\lambda_i + (1-\theta_i)(1-\lambda_i)\}$. The end of (5) resembles a Beta distribution with parameters $(\sum_{j=1}^{n} I(x_u^j = x_v^j) + 1,\ n - \sum_{j=1}^{n} I(x_u^j = x_v^j) + 1)$ except that only part of $\eta$, namely $\theta_i$, is random. Now we want to use a Beta distribution to approximate the likelihood with respect to $\theta_i$, and we need to remove the contribution of $\lambda_i$ and only consider the contribution from $\theta_i$. We choose Beta$(\lfloor n\hat{\theta}_i \rfloor + 1,\ n - \lfloor n\hat{\theta}_i \rfloor + 1)$ where $\hat{\theta}_i$ is the MLE of $\theta_i$ (e.g. from the PCD algorithm). This approximation is named the Stripped Beta Approximation. The simulation results in Section 4.2 indicate that the performance of the stripped Beta approximation is very close to using exact calculation. Also, this approximation only requires as much computation as in tractable tree-structure MRFs, and it does not require generating expensive particles as in the MH algorithm with auxiliary variables.

The integral $\int P(X; \theta_i, \phi_{c_{-i}})\, dG_0(\theta_i)$ in (4) can be calculated via Monte Carlo approximation. We draw a number of samples of $\theta_i$ from $G_0$, evaluate $P(X; \theta_i, \phi_{c_{-i}})$, and take the average.
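A sketch of how the stripped Beta likelihood and the Monte Carlo integral could be combined into the unnormalized group-assignment weights of (4); the function names are ours, and this is an illustration rather than the authors' code.

```python
import numpy as np
from scipy.stats import beta

def stripped_beta_loglik(theta_i, n, theta_hat_i):
    """log of the stripped Beta approximation to L(theta_i | X, theta_-i):
    Beta(floor(n*theta_hat_i)+1, n - floor(n*theta_hat_i)+1) evaluated at
    theta_i, with theta_hat_i a point estimate of theta_i (e.g. PCD)."""
    a = np.floor(n * theta_hat_i) + 1
    b = n - np.floor(n * theta_hat_i) + 1
    return beta.logpdf(theta_i, a, b)

def group_weights(phi, n_counts, n, theta_hat_i, alpha0, n_mc=50, seed=2):
    """Unnormalized weights of (4) for reassigning c_i: existing group c
    gets n_{-i,c} * L(phi_c); a new group gets alpha0 times a Monte Carlo
    estimate of the integral of L(theta) over G0 = Unif(0, 1)."""
    rng = np.random.default_rng(seed)
    w = [m * np.exp(stripped_beta_loglik(p, n, theta_hat_i))
         for p, m in zip(phi, n_counts)]
    mc = np.exp(stripped_beta_loglik(rng.uniform(size=n_mc),
                                     n, theta_hat_i)).mean()
    w.append(alpha0 * mc)
    return np.array(w)

# Example: two existing groups; edge i's point estimate is 0.62, n = 200.
print(group_weights(phi=[0.6, 0.2], n_counts=[5, 3], n=200,
                    theta_hat_i=0.62, alpha0=1.0).round(3))
```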
3.2.2 Drawing Posterior Samples of $\phi|c$

The stripped Beta approximation also allows us to draw posterior samples from $\phi|c$ approximately. Suppose that there are $k$ groups according to $c$, and we have estimates for $\phi$, denoted as $\hat{\phi} = (\hat{\phi}_1, \ldots, \hat{\phi}_k)$. We denote the numbers of elements in the $k$ groups by $m = \{m_1, \ldots, m_k\}$. For group $i$, we draw a posterior sample for $\phi_i$ from Beta$(\lfloor m_i n \hat{\phi}_i \rfloor + 1,\ m_i n - \lfloor m_i n \hat{\phi}_i \rfloor + 1)$.
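The draw can be sketched as below; here we take $\hat{\phi}_i$ to be the mean of the point estimates $\hat{\theta}$ within group $i$, which is our own assumption about how the group-level estimate is formed.

```python
import numpy as np

def sample_phi(c, theta_hat, n, seed=3):
    """One approximate posterior sample of phi | c via the stripped Beta
    approximation: group i with m_i members is drawn from
    Beta(floor(m_i*n*phi_hat_i)+1, m_i*n - floor(m_i*n*phi_hat_i)+1)."""
    rng = np.random.default_rng(seed)
    c = np.asarray(c)
    theta_hat = np.asarray(theta_hat)
    k = c.max() + 1
    phi = np.empty(k)
    for i in range(k):
        members = theta_hat[c == i]
        m_i, phi_hat_i = len(members), members.mean()  # our assumption
        s = np.floor(m_i * n * phi_hat_i)
        phi[i] = rng.beta(s + 1, m_i * n - s + 1)
    return phi

print(sample_phi(c=[0, 0, 1, 1, 1],
                 theta_hat=[0.70, 0.72, 0.20, 0.25, 0.18], n=200).round(3))
```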
4 Simulations
We investigate the performance of our Bayesian estimators on three models: (i) a tree-MRF, (ii)
a small grid-MRF whose likelihood is tractable, and (iii) a large grid-MRF whose likelihood is
intractable. We first set the ground truth of the parameters, and then generate training and testing
samples. On training data, we apply our grouping-aware Bayesian estimators and two baseline
estimators, namely a grouping-blind estimator and an oracle estimator. The grouping-blind estimator
does not know groups exist in the parameters, and estimates the parameters in the normal MLE
fashion. The oracle estimator knows the ground truth of the groupings, and ties the parameters from
the same group and estimates them via MLE. For the tree-MRF, our Bayesian estimator is exact
since the likelihood is tractable. For the small grid-MRF, we have three variations for the Bayesian
estimator, namely Gibbs sampling with exact likelihood computation, MH with auxiliary variables,
and Gibbs sampling with stripped Beta approximation. For the large grid-MRF, the computational
burden only allows us to apply Gibbs sampling with stripped Beta approximation.
We compare the estimators by three measures. The first is the average absolute error of estimate $\frac{1}{r}\sum_{i=1}^{r} |\theta_i - \hat{\theta}_i|$ where $\hat{\theta}_i$ is the estimate of $\theta_i$. The second measure is the log-likelihood of the testing data, or the log pseudo-likelihood [4] of the testing data when the exact likelihood is intractable. Thirdly, we evaluate how informative the grouping yielded by the Bayesian estimator is. We use the variation of information metric [19] between the inferred grouping $\hat{C}$ and the ground truth grouping $C$, namely $\mathrm{VI}(\hat{C}, C)$. Since $\mathrm{VI}(\hat{C}, C)$ is sensitive to the number of groups in $\hat{C}$, we contrast it with $\mathrm{VI}(\tilde{C}, C)$ where $\tilde{C}$ is a random grouping with its number of groups the same as $\hat{C}$. Eventually, we evaluate $\hat{C}$ via the VI difference, namely $\mathrm{VI}(\tilde{C}, C) - \mathrm{VI}(\hat{C}, C)$. A larger value of the VI difference indicates a more informative grouping yielded by our Bayesian estimator. Because we have one grouping in each of the $T$ MCMC steps, we average the VI difference yielded in each of the $T$ steps.
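For reference, the VI metric between two groupings of the same $r$ parameters, each encoded as a label vector, can be computed as follows (a standard computation, not specific to this paper).

```python
import numpy as np

def variation_of_information(c1, c2):
    """VI(C1, C2) = H(C1|C2) + H(C2|C1) between two groupings given as
    label vectors over the same r items (natural logarithm)."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    vi = 0.0
    for a in np.unique(c1):
        for b in np.unique(c2):
            p_ab = np.mean((c1 == a) & (c2 == b))
            if p_ab > 0:
                p_a, p_b = np.mean(c1 == a), np.mean(c2 == b)
                vi -= p_ab * (np.log(p_ab / p_a) + np.log(p_ab / p_b))
    return vi

print(variation_of_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 0.0
print(variation_of_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 2*log(2)
```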
4.1 Simulations on Tree-structure MRFs
For the structure of the MRF, we choose a perfect binary tree of height 12 (i.e. 8,191 nodes and
8,190 edges). We assume there are 25 groups among the 8,190 parameters. The base distribution
G0 is Unif(0, 1). We first generate the true parameters for the 25 groups from Unif(0, 1). We then
randomly assign each of the 8,190 parameters to one of the 25 groups. We then generate 1,000
testing samples and $n$ training samples ($n$ = 100, 200, ..., 1,000). Eventually, we apply the grouping-blind MLE, the oracle MLE, and our grouping-aware Bayesian estimator on the training samples. For tree-structure MRFs, both MLE and Bayesian estimation have a closed-form solution. For the Bayesian estimator, we set the number of Gibbs sampling steps to be 500 and set $\alpha_0 = 1.0$. We replicate the experiment 500 times, and the averaged results are in Figure 1.

[Figure 1 plots appear here: three panels against training sample size (100-1000), with curves for MLE, Oracle, and Bayesian.]
Figure 1: Performance of the grouping-blind MLE, the oracle MLE and our Bayesian estimator on tree-structure MRFs in terms of (a) error of estimate and (b) log-likelihood of test data. Subfigure (c) shows the VI difference between the grouping yielded by our Bayesian estimator and random grouping.
Our grouping-aware Bayesian estimator has a lower estimate error and a higher log-likelihood of test data compared with the grouping-blind MLE, demonstrating the "blessing of abstraction". Our Bayesian estimator performs worse than the oracle MLE, as we expect. In addition, as the training sample size increases, the performance of our Bayesian estimator approaches that of the oracle MLE. The VI difference in Figure 1(c) indicates that the Bayesian estimator also recovers the latent grouping to some extent, and the inferred groupings become more and more reliable as the training size increases. The number of groups inferred by the Bayesian estimator and its running time are in
Figure 2. We also investigate the asymptotic performance of the estimators and their performance
when there are no parameter groups. The results are provided in the supplementary materials.
4.2 Simulations on Small Grid-MRFs
For the structure of the MRF, we choose a 4×4 grid with 16 nodes and 24 edges. Exact likelihood is tractable in this small model, which allows us to investigate how good the two types of approximation are. We apply the grouping-blind MLE (the PCD algorithm), the oracle MLE (the PCD algorithm with the parameters from the same group tied) and three Bayesian estimators: Gibbs sampling with exact likelihood computation (Gibbs ExactL), Metropolis-Hastings with auxiliary variables (MH AuxVar), and Gibbs sampling with stripped Beta approximation (Gibbs SBA). We assume there are five parameter groups. The base distribution is Unif(0, 1). We first generate the true parameters for the five groups from Unif(0, 1). We then randomly assign each of the 24 parameters to one of the five groups. We then generate 1,000 testing samples and n training samples (n = 100, 200, ..., 1,000). For Gibbs ExactL and Gibbs SBA, we set the number of Gibbs sampling steps to be 100. For MH AuxVar, we set the number of MH steps to be 500 and its proposal number M to be 5. The parameter ?Q in Section 3.1.2 is set to be 0.001 and the parameter S is set to be 100. For all three Bayesian estimators, we set α0 = 1.0. We replicate the experiment 50 times, and the averaged results are in Figure 4.
Our grouping-aware Bayesian estimators have a lower estimate error and a higher log likelihood of test data, compared with the grouping-blind MLE, demonstrating the blessing of abstraction. All three Bayesian estimators perform worse than the oracle MLE, as we expect. The VI difference in Figure 4(c) indicates that the Bayesian estimators also recover the grouping to some extent, and the inferred groupings become more and more reliable as the training size increases. In Figure 3, we provide the boxplots of the number of groups inferred by Gibbs ExactL, MH AuxVar and Gibbs SBA. All three methods recover a reasonable number of groups, and Gibbs SBA slightly over-estimates the number of groups.

Figure 3: The number of groups inferred by (a) Gibbs ExactL, (b) MH AuxVar and (c) Gibbs SBA.
Figure 4: Performance of grouping-blind MLE, oracle MLE, Gibbs ExactL, MH AuxVar, and Gibbs SBA on
the small grid-structure MRFs in terms of (a) error of estimate and (b) log-likelihood of test data. Subfigure (c)
shows the VI difference between the grouping yielded by our Bayesian estimators and random grouping.
Figure 5: Performance of the grouping-blind MLE, the oracle MLE and the Bayesian estimator (Gibbs SBA)
on large grid-structure MRFs in terms of (a) error of estimate and (b) log-likelihood of test data. Subfigure (c)
shows the VI difference between the grouping yielded by our Bayesian estimator and random grouping.
Among the three Bayesian estimators, Gibbs ExactL has the lowest estimate error and the highest log likelihood of test data. Gibbs SBA also performs considerably well, and its performance is close to the performance of Gibbs ExactL. MH AuxVar works slightly worse, especially when there is less training data. However, MH AuxVar recovers better groupings than Gibbs SBA when there are more training data. The run times of the three Bayesian estimators are listed in Table 1. Gibbs ExactL has a computational complexity that is exponential in the dimensionality d, and cannot be applied to situations when d > 20. MH AuxVar is also computationally intensive because it has to generate expensive particles. Gibbs SBA runs fast, with its burden mainly from running PCD under a specific grouping in each Gibbs sampling step, and it scales well.

Table 1: The run time (in seconds) of Gibbs ExactL, MH AuxVar and Gibbs SBA when training size is n.

                 n=100       n=500       n=1,000
Gibbs ExactL     88,136.3    91,055.0    92,503.4
MH AuxVar        540.2       3,342.2     4,546.7
Gibbs SBA        8.1         10.8        14.2
4.3 Simulations on Large Grid-MRFs
The large grid consists of 30 rows and 30 columns (i.e. 900 nodes and 1,740 edges). Exact likelihood is intractable for this large model, and we cannot run Gibbs ExactL. The high dimension also prohibits MH AuxVar. Therefore, we only run the Gibbs SBA algorithm on this large grid-structure MRF. We assume that there are 10 groups among the 1,740 parameters. We also evaluate the estimators by the log pseudo-likelihood of testing data. The other settings of the experiments stay the same as Section 4.2. We replicate the experiment 50 times, and the averaged results are in Figure 5.

For all 10 training sets, our Bayesian estimator Gibbs SBA has a lower estimate error and a higher log likelihood of test data, compared with the grouping-blind MLE (via the PCD algorithm). Gibbs SBA has a higher estimate error and a lower pseudo-likelihood of test data than the oracle MLE. The VI difference in Figure 5(c) indicates that Gibbs SBA gradually recovers the grouping as the training size increases. The number of groups inferred by Gibbs SBA and its running time are provided in Figure 6. Similarly to the observation in Section 4.2, Gibbs SBA overestimates the number of groups. Gibbs SBA finishes the simulations on 900 nodes and 1,740 edges in hundreds of minutes (depending on the training size), which is considered to be very fast.

Figure 6: Number of groups inferred by Gibbs SBA and its run time.
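Since exact likelihood is intractable here, evaluation uses Besag's log pseudo-likelihood [4]. Below is a minimal Python sketch for a pairwise binary MRF; it assumes ±1 states and Ising-style potentials θ_e · x_u · x_v with one parameter per edge, which is an illustrative reading rather than the exact potential form used in the model.

import numpy as np

def log_pseudo_likelihood(X, edges, theta):
    """Sum over samples and nodes of log p(x_v | x_{N(v)}) for a pairwise
    MRF with one parameter per edge (assumed Ising-style potentials).
    X: (n_samples, n_nodes) array of +/-1; edges: list of (u, v) pairs;
    theta: array of per-edge parameters aligned with `edges`."""
    field = np.zeros_like(X, dtype=float)
    for e, (u, v) in enumerate(edges):
        field[:, v] += theta[e] * X[:, u]   # neighbor contribution to v
        field[:, u] += theta[e] * X[:, v]   # and symmetrically to u
    # log p(x_v | rest) = x_v * field_v - log(exp(field_v) + exp(-field_v))
    return float(np.sum(X * field - np.logaddexp(field, -field)))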
Table 2: Log pseudo-likelihood (LPL) of training and testing data from MLE (PCD) and Bayesian estimate (Gibbs SBA), the number of groups inferred by Gibbs SBA, and its run time in the Senate voting experiments.

         LPL-Train                   LPL-Test                    # Groups   Run Time (mins)
         MLE         Gibbs SBA       MLE         Gibbs SBA
Exp 1    -10716.75   -10721.34       -9022.01    -8989.87        7.89       204
Exp 2    -8306.17    -8322.34        -11490.47   -11446.45       7.29       183

5 Real-world Application
We apply the Gibbs SBA algorithm on US Senate voting data from the 109th Congress (available
at www.senate.gov). The 109th Congress has two sessions, the first session in 2005 and the second
session in 2006. There are 366 votes and 278 votes in the two sessions, respectively. There are 100
senators in both sessions, but Senator Corzine only served the first session and Senator Menendez
only served the second session. We remove them. In total, we have 99 senators in our experiments,
and we treat the votes from the 99 senators as the 99 variables in the MRF. We only consider contested votes, namely we remove the votes with less than ten or more than ninety supporters. In total,
there are 292 votes and 221 votes left in the two sessions, respectively. The structure of the MRF is
from Figure 13 in [2]. There are in total 279 edges. The votes are coded as −1 for no and 1 for yes.
We replace all missing votes with −1, staying consistent with [2]. We perform two experiments.
First, we train the MRF using the first session data, and test on the second session data. Then, we
train on the second session and test on the first session. We compare our Bayesian estimator (via
Gibbs SBA) and MLE (via PCD) by the log pseudo-likelihood of testing data since exact likelihood
is intractable. We set the number of Gibbs sampling steps to be 3,000. Both of the two experiments are finished in around three hours on a single CPU. The results are summarized in Table 2.
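A minimal sketch of the vote preprocessing described above; the function name and the assumed votes-matrix layout (senators × votes, with 0 marking a missing vote) are illustrative, not from the data source.

import numpy as np

def preprocess_votes(votes, low=10, high=90):
    """votes: (n_senators, n_votes) with 1 = yes, -1 = no, 0 = missing.
    Replaces missing votes with -1 (as in [2]) and keeps only contested
    votes, i.e. those with between `low` and `high` supporters."""
    votes = np.where(votes == 0, -1, votes)
    support = (votes == 1).sum(axis=0)
    contested = (support >= low) & (support <= high)
    return votes[:, contested]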
In the first experiment, the log pseudo-likelihood of test data is −9022.01 from MLE, whereas it is −8989.87 from our Bayesian estimate. In the second experiment, the log pseudo-likelihood of test data is −11490.47 from MLE, whereas it is −11446.45 from our Bayesian estimate. The increase of log pseudo-likelihood is comparable to the increase of log (pseudo-)likelihood we gain in
the simulations (please refer to Figures 1b, 4b and 5b at the points when we simulate 200 and 300
training samples). Both experiments indicate that the models trained with the Gibbs SBA algorithm
generalize considerably better than the models trained with MLE. Gibbs SBA also infers there are
around eight different types of relations among the senators. The two trained models are provided
in the supplementary materials, and the estimated parameters in the two models are consistent.
6 Discussion
Bayesian nonparametric approaches [23, 10], such as the Dirichlet process [7], provide an elegant
way of modeling mixtures with an unknown number of components. These approaches have yielded
advances in different machine learning areas, such as the infinite Gaussian mixture models [26], the
infinite mixture of Gaussian processes [27], infinite HMMs [3, 8], infinite HMRFs [6], DP-nonlinear
models [28], DP-mixture GLMs [14], infinite SVMs [33, 32], and the infinite latent attribute models
[24]. In this paper, we play the same trick of replacing the prior distribution with a prior stochastic process to accommodate our uncertainty about the number of parameter groups. To the best of
our knowledge, this is the first time a Bayesian nonparametric approach is applied to models whose
likelihood is intractable. Accordingly, we propose two types of approximation, namely a MetropolisHastings algorithm with auxiliary variables and a Gibbs sampling algorithm with stripped Beta approximation. Both algorithms show superior performance over conventional MLE, and Gibbs SBA
can also scale well to large-scale MRFs. The Markov chains in both algorithms are ergodic, but may not satisfy detailed balance because we rely on approximation. Thus, both algorithms are guaranteed to converge for general MRFs, but they may not converge exactly to the target distribution.
In this paper, we only consider the situation where the potential functions are pairwise and there is
only one parameter in each potential function. For graphical models with more than one parameter
in the potential functions, it is appropriate to group the parameters on the level of potential functions.
A more sophisticated base distribution G0 (such as some multivariate distribution) needs to be considered. In this paper, we also assume the structures of the MRFs are given. When the structures are
unknown, we still need to perform structure learning. Allowing structure learners to automatically identify structure modules will be another very interesting topic to explore in future research.
Acknowledgements
The authors acknowledge the support of NIGMS R01GM097618-01 and NLM R01LM011028-01.
References
[1] A. U. Asuncion, Q. Liu, A. T. Ihler, and P. Smyth. Particle filtered MCMC-MLE with connections to contrastive divergence. In ICML, 2010.
[2] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9:485-516, June 2008.
[3] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In NIPS, 2002.
[4] J. Besag. Statistical analysis of non-lattice data. JRSS-D, 24(3):179-195, 1975.
[5] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[6] S. P. Chatzis and G. Tsechpenakis. The infinite hidden Markov random field model. In ICCV, 2009.
[7] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230, 1973.
[8] J. V. Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In ICML, 2008.
[9] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, New York, 2007.
[10] S. J. Gershman and D. M. Blei. A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 56(1):1-12, 2012.
[11] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. Computing Science and Statistics, pages 156-163, 1991.
[12] M. Gutmann and J. Hirayama. Bregman divergence as general framework to estimate unnormalized statistical models. In UAI, pages 283-290, Corvallis, Oregon, 2011. AUAI Press.
[13] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.
[14] L. A. Hannah, D. M. Blei, and W. B. Powell. Dirichlet process mixtures of generalized linear models. JMLR, 12:1923-1953, 2011.
[15] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[16] A. Hyvärinen. Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. IEEE Transactions on Neural Networks, 18(5):1529-1531, 2007.
[17] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499-2512, 2007.
[18] S. Lyu. Unifying non-maximum likelihood learning objectives with minimum KL contraction. NIPS, 2011.
[19] M. Meila. Comparing clusterings by the variation of information. In COLT, 2003.
[20] J. Møller, A. Pettitt, R. Reeves, and K. Berthelsen. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika, 93(2):451-458, 2006.
[21] I. Murray, Z. Ghahramani, and D. J. C. MacKay. MCMC for doubly-intractable distributions. In UAI, 2006.
[22] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[23] P. Orbanz and Y. W. Teh. Bayesian nonparametric models. In Encyclopedia of Machine Learning. Springer, 2010.
[24] K. Palla, D. A. Knowles, and Z. Ghahramani. An infinite latent attribute model for network data. In ICML, 2012.
[25] J. G. Propp and D. B. Wilson. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms, 9(1-2):223-252, 1996.
[26] C. E. Rasmussen. The infinite Gaussian mixture model. In NIPS, 2000.
[27] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In NIPS, 2001.
[28] B. Shahbaba and R. Neal. Nonlinear models using Dirichlet process mixtures. JMLR, 10:1829-1850, 2009.
[29] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.
[30] T. Tieleman and G. Hinton. Using fast weights to improve persistent contrastive divergence. In ICML, 2009.
[31] D. Vickrey, C. Lin, and D. Koller. Non-local contrastive objectives. In Proc. of the International Conference on Machine Learning. Citeseer, 2010.
[32] J. Zhu, N. Chen, and E. P. Xing. Infinite latent SVM for classification and multi-task learning. In NIPS, 2011.
[33] J. Zhu, N. Chen, and E. P. Xing. Infinite SVM: a Dirichlet process mixture of large-margin kernel machines. In ICML, 2011.
[34] S. C. Zhu and X. Liu. Learning in Gibbsian fields: How accurate and how fast can it be? IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:1001-1006, 2002.
4,378 | 4,962 | On Sampling from the Gibbs Distribution with
Random Maximum A-Posteriori Perturbations
Tamir Hazan
University of Haifa
Subhransu Maji
TTI Chicago
Tommi Jaakkola
CSAIL, MIT
Abstract
In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs distributions by introducing low dimensional perturbations and solving the corresponding MAP assignments. Our approach also leads to new ways to derive lower bounds on partition functions. We demonstrate empirically that our method excels in the typical "high signal high coupling" regime. The setting results in ragged energy landscapes that are challenging for alternative approaches to sampling and/or lower bounds.
1 Introduction
Inference in complex models drives much of the research in machine learning applications, from
computer vision, natural language processing, to computational biology. Examples include scene
understanding, parsing, or protein design. The inference problem in such cases involves finding
likely structures, whether objects, parsers, or molecular arrangements. Each structure corresponds
to an assignment of values to random variables and the likelihood of an assignment is based on
defining potential functions in a Gibbs distribution. Usually, it is feasible to find only the most
likely or maximum a-posteriori (MAP) assignment (structure) rather than sampling from the full
Gibbs distribution. Substantial effort has gone into developing algorithms for recovering MAP assignments, either based on specific structural restrictions such as super-modularity [2] or by devising
cutting-planes based methods on linear programming relaxations [19, 24]. However, MAP inference
is limited when there are other likely assignments.
Our work seeks to leverage MAP inference so as to sample efficiently from the full Gibbs distribution. Specifically, we aim to draw either approximate or unbiased samples from Gibbs distributions
by introducing low dimensional perturbations in the potential functions and solving the corresponding MAP assignments. Connections between random MAP perturbations and Gibbs distributions
have been explored before. Recently [17, 21] defined probability models that are based on low
dimensional perturbations, and empirically tied them to Gibbs distributions. [5] augmented these
results by providing bounds on the partition function in terms of random MAP perturbations.
In this work we build on these results to construct an efficient sampler for the Gibbs distribution, also
deriving new lower bounds on the partition function. Our approach excels in regimes where there
are several but not exponentially many prominent assignments. In such ragged energy landscapes
classical methods for the Gibbs distribution such as Gibbs sampling and Markov chain Monte Carlo
methods, remain computationally expensive [3, 25].
2 Background
Statistical inference problems involve reasoning about the states of discrete variables whose configurations (assignments of values) specify the discrete structures of interest. We assume that the models are parameterized by real valued potentials θ(x) = θ(x1, ..., xn) < ∞ defined over a discrete product space X = X1 × · · · × Xn. The effective domain is implicitly defined through θ(x) via exclusions θ(x) = −∞ whenever x ∉ dom(θ). The real valued potential functions are mapped to the probability scale via the Gibbs distribution:

    p(x1, ..., xn) = (1/Z) exp(θ(x1, ..., xn)),  where  Z = Σ_{x1,...,xn} exp(θ(x1, ..., xn)).   (1)

The normalization constant Z is called the partition function. The feasibility of using the distribution for prediction, including sampling from it, is inherently tied to the ability to evaluate the partition function, i.e., the ability to sum over the discrete structures being modeled. In general, such counting problems are often hard, in #P.
A slightly easier problem is that of finding the most likely assignment of values to variables, also known as the maximum a-posteriori (MAP) prediction:

    (MAP)   argmax_{x1,...,xn} θ(x1, ..., xn).   (2)
Recent advances in optimization theory have been translated to successful algorithms for solving such MAP problems in many cases of practical interest. Although the MAP prediction problem is still NP-hard in general, it is often simpler than sampling from the Gibbs distribution.

Our approach is based on representations of the Gibbs distribution and the partition function using extreme value statistics of linearly perturbed potential functions. Let {γ(x)}_{x∈X} be a collection of random variables with zero mean, and consider random potential functions of the form θ(x) + γ(x). Analytic expressions for the statistics of a randomized MAP predictor, x̂ ∈ argmax_x {θ(x) + γ(x)}, can be derived for general discrete sets, whenever independent and identically distributed (i.i.d.) random perturbations are applied for every assignment x ∈ X. Specifically, when the random perturbations follow the Gumbel distribution (cf. [12]), we obtain the following result.

Theorem 1. ([4], see also [17, 5]) Let {γ(x)}_{x∈X} be a collection of i.i.d. random variables, each following the Gumbel distribution with zero mean, whose cumulative distribution function is F(t) = exp(−exp(−(t + c))), where c is the Euler constant. Then

    log Z = E_γ [ max_{x∈X} { θ(x) + γ(x) } ],

    (1/Z) exp(θ(x̂)) = P_γ [ x̂ ∈ argmax_{x∈X} { θ(x) + γ(x) } ].
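To make Theorem 1 concrete, here is a minimal Python sketch on a small enumerable model; this is a brute-force illustration of the identities, not the low dimensional scheme developed below, and the table size and names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=32)        # theta(x) enumerated over 32 assignments
c = 0.5772156649015329             # Euler constant

def gumbel(shape):                 # zero-mean Gumbel, F(t) = exp(-exp(-(t+c)))
    return -np.log(-np.log(rng.uniform(size=shape))) - c

m = 10000
vals = theta[None, :] + gumbel((m, 32))
samples = vals.argmax(axis=1)      # each argmax is an unbiased Gibbs sample
logZ_estimate = vals.max(axis=1).mean()
print(logZ_estimate, np.log(np.exp(theta).sum()))   # close for large m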
The max-stability of the Gumbel distribution provides a straightforward approach to generate unbiased samples from the Gibbs distribution as well as to approximate the partition function by a sample mean of random MAP perturbations. Assume we sample j = 1, ..., m independent predictions max_x {θ(x) + γ_j(x)}; then every maximal argument is an unbiased sample from the Gibbs distribution. Moreover, the randomized MAP predictions max_x {θ(x) + γ_j(x)} are independent and follow the Gumbel distribution, whose variance is π²/6. Therefore Chebyshev's inequality dictates, for every ε, m,

    P_γ [ | (1/m) Σ_{j=1}^{m} max_x { θ(x) + γ_j(x) } − log Z | ≥ ε ] ≤ π² / (6 m ε²).   (3)

In general each x = (x1, ..., xn) represents an assignment to n variables. Theorem 1 suggests to introduce an independent perturbation γ(x) for each such n-dimensional assignment x ∈ X. The complexity of inference and learning in this setting would be exponential in n. In our work we propose to investigate low dimensional random perturbations as the main tool to efficiently (approximately) sample from the Gibbs distribution.
3 Probable approximate samples from the Gibbs distribution
Sampling from the Gibbs distribution is inherently tied to estimating the partition function. Markov properties that simplify the distribution also decompose the computation of the partition function. For example, assume a graphical model with potential functions associated with subsets of variables α ⊆ {1, ..., n} so that θ(x) = Σ_{α∈A} θ_α(x_α). Assume that the subsets are disjoint except for their common intersection β = ∩_{α∈A} α. This separation implies that the partition function can be computed in lower dimensional pieces

    Z = Σ_{x_β} Π_{α∈A} Σ_{x_α\x_β} exp(θ_α(x_α)).

As a result, the computation is exponential only in the size of the subsets α ∈ A. Thus, we can also estimate the partition function with lower dimensional random MAP perturbations, E_γ [ max_{x_α\x_β} { θ_α(x_α) + γ_α(x_α) } ]. The random perturbations are now required only for each assignment of values to the variables within the subsets α ∈ A rather than the set of all variables. We approximate such partition functions with low dimensional perturbations and their averages. The overall computation is cast in a single MAP problem using an extended representation of potential functions by replicating variables.
Lemma 1. Let A be subsets of variables that are separated by their joint intersection β = ∩_{α∈A} α. We create multiple copies of x_α, namely x̃_α = (x_{α,j_α})_{j_α=1,...,m_α}, and define the extended potential function θ̃_α(x̃_α) = Σ_{j_α=1}^{m_α} θ_α(x_{α,j_α}) / m_α. We also define the extended perturbation model γ̃_α(x̃_α) = Σ_{j_α=1}^{m_α} γ_{α,j_α}(x_{α,j_α}) / m_α, where each γ_{α,j_α}(x_{α,j_α}) is independent and distributed according to the Gumbel distribution with zero mean. Then, for every x_β, with probability at least 1 − Σ_{α∈A} π²/(6 m_α ε²),

    | max_{x̃\x_β} Σ_{α∈A} ( θ̃_α(x̃_α) + γ̃_α(x̃_α) ) − Σ_{α∈A} log Σ_{x_α\x_β} exp(θ_α(x_α)) | ≤ |A| ε.
Proof: Equation (3) implies that for every x_β, with probability at most π²/(6 m_α ε²),

    | (1/m_α) Σ_{j_α=1}^{m_α} max_{x_α\x_β} { θ_α(x_α) + γ_{α,j_α}(x_α) } − log Σ_{x_α\x_β} exp(θ_α(x_α)) | ≥ ε.

To compute the sampled average with a single max-operation we introduce the multiple copies x̃_α = (x_{α,j_α})_{j_α=1,...,m_α}; thus Σ_{j_α=1}^{m_α} max_{x_α\x_β} { θ_α(x_α) + γ_{α,j_α}(x_α) } = max_{x̃_α\x_β} Σ_{j_α=1}^{m_α} { θ_α(x_{α,j_α}) + γ_{α,j_α}(x_{α,j_α}) }. By the union bound it holds for every α ∈ A simultaneously with probability at least 1 − Σ_{α∈A} π²/(6 m_α ε²). Since x_β is fixed for every α ∈ A, the maximizations are done independently across subsets in x̃ \ x_β, where x̃ is the concatenation of all x̃_α, and

    Σ_{α∈A} max_{x̃_α\x_β} Σ_{j_α=1}^{m_α} { θ_α(x_{α,j_α}) + γ_{α,j_α}(x_{α,j_α}) } = max_{x̃\x_β} Σ_{α∈A} Σ_{j_α=1}^{m_α} { θ_α(x_{α,j_α}) + γ_{α,j_α}(x_{α,j_α}) }.

The proof then follows from the triangle inequality.
Whenever the graphical model has no cycles we can iteratively apply the separation properties without increasing the computational complexity of perturbations. Thus we may randomly perturb the subsets of potentials in the graph. For notational simplicity we describe our approximate sampling scheme for pairwise interactions α = (i, j), although it holds for general graphical models without cycles:
Theorem 2. Let θ(x) = Σ_{i∈V} θ_i(x_i) + Σ_{(i,j)∈E} θ_{i,j}(x_i, x_j) be a graphical model without cycles, and let p(x) be the Gibbs distribution defined in Equation (1). Let θ̃(x̃) = Σ_{k_1,...,k_n=1}^{m_1,...,m_n} θ(x_{1,k_1}, ..., x_{n,k_n}) / Π_i m_i, and γ̃_{i,j}(x̃_i, x̃_j) = Σ_{k_i,k_j=1}^{m_i,m_j} γ_{i,j,k_i,k_j}(x_{i,k_i}, x_{j,k_j}) / (m_i m_j), where each perturbation is independent and distributed according to the Gumbel distribution with zero mean. Then, for every edge (r, s) while m_r = m_s = 1 (i.e., they have no multiple copies), there holds with probability at least 1 − Σ_{i=1}^{n} π² c / (6 m_i ε²), where c = max_i |X_i|,

    | log P_γ [ x_r, x_s ∈ argmax_{x̃} { θ̃(x̃) + Σ_{(i,j)∈E} γ̃_{i,j}(x̃_i, x̃_j) } ] − log Σ_{x\{x_r,x_s}} p(x) | ≤ n ε.
Proof: Theorem 1 implies that we sample (x_r, x_s) approximately from the Gibbs distribution marginal probabilities with a max-operation, if we approximate Σ_{x\{x_r,x_s}} exp(θ(x)). Using graph separation (or equivalently the Markov property) it suffices to approximate the partial partition function over the disjoint subtrees T_r, T_s that originate from r, s respectively. Lemma 1 describes this case for a directed tree with a single parent. We use this by induction on the parents in these directed trees, noticing that graph separation guarantees: the statistics of Lemma 1 hold uniformly for every assignment of the parent's non-descendants as well; the optimal assignments in Lemma 1 are chosen independently for every child for every assignment of the parent's non-descendants label.
Our approximated sampling procedure expands the graphical model, creating layers of the original
graph, while connecting edges between vertices in the different layers if an edge exists in the original
graph. We use graph separations (Markov properties) to guarantee that the number of added layers
is polynomial in n, while we approach arbitrarily close to the Gibbs distribution. This construction
preserves the structure of the original graph, in particular, whenever the original graph has no cycles,
the expanded graph does not have cycles as well. In the experiments we show that this probability
model approximates well the Gibbs distribution for graphical models with many cycles.
4 Unbiased sampling using sequential bounds on the partition function
In the following we describe how to use random MAP perturbations to generate unbiased samples from the Gibbs distribution. Sampling from the Gibbs distribution is inherently tied to estimating the partition function. Assume we could compute the partition function exactly; then we could sample from the Gibbs distribution sequentially: for every dimension we sample x_i with probability proportional to Σ_{x_{i+1},...,x_n} exp(θ(x)). Unfortunately, approximations to the partition function, as described in Section 3, cannot provide a sequential procedure that would generate unbiased samples from the full Gibbs distribution. Instead, we construct a family of self-reducible upper bounds which imitate the partition function behavior, namely bound the summation over its exponentiations. These upper bounds extend the one in [5] when restricted to local perturbations.
Lemma 2. Let {γ_i(x_i)} be a collection of i.i.d. random variables, each following the Gumbel distribution with zero mean. Then for every j = 1, ..., n and every x_1, ..., x_{j−1} holds

    Σ_{x_j} exp( E_γ [ max_{x_{j+1},...,x_n} { θ(x) + Σ_{i=j+1}^{n} γ_i(x_i) } ] ) ≤ exp( E_γ [ max_{x_j,...,x_n} { θ(x) + Σ_{i=j}^{n} γ_i(x_i) } ] ).

In particular, for j = n holds Σ_{x_n} exp(θ(x)) = exp( E_{γ_n(x_n)} [ max_{x_n} { θ(x) + γ_n(x_n) } ] ).
Proof: The result is an application of the expectation-optimization interpretation of the partition function in Theorem 1. The left hand side equals exp( E_{γ_j} [ max_{x_j} E_{γ_{j+1},...,γ_n} [ max_{x_{j+1},...,x_n} { θ(x) + Σ_{i=j}^{n} γ_i(x_i) } ] ] ), while the right hand side is attained by alternating the maximization with respect to x_j with the expectation of γ_{j+1}, ..., γ_n. The proof then follows by taking the exponent.
We use these upper bounds for every dimension i = 1, ..., n to sample from a probability distribution
that follows a summation over exponential functions, with a discrepancy that is described by the
upper bound. This is formalized below in Algorithm 1
Algorithm 1 Unbiased sampling from the Gibbs distribution using randomized prediction

Iterate over j = 1, ..., n, while keeping fixed x_1, ..., x_{j−1}. Set

1. p_j(x_j) = exp( E_γ [ max_{x_{j+1},...,x_n} { θ(x) + Σ_{i=j+1}^{n} γ_i(x_i) } ] ) / exp( E_γ [ max_{x_j,...,x_n} { θ(x) + Σ_{i=j}^{n} γ_i(x_i) } ] ).

2. p_j(r) = 1 − Σ_{x_j} p_j(x_j).

3. Sample an element according to p_j(·). If r is sampled then reject and restart with j = 1. Otherwise, fix the sampled element x_j and continue the iterations.

Output: x_1, ..., x_n
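Below is a Python sketch of Algorithm 1 on a small enumerable model with binary variables. One deliberate simplification: the exact expectations E_γ[max ...] that the algorithm assumes are replaced by Monte Carlo means, so this illustration is only approximately unbiased; all names are illustrative.

import itertools
import numpy as np

c = 0.5772156649015329
rng = np.random.default_rng(3)

def expected_max(theta, n, prefix, M=2000):
    """Monte Carlo estimate of E_gamma[max over the free tail variables of
    theta(x) + sum of gamma_i(x_i) over the free variables], with the first
    len(prefix) coordinates of x fixed."""
    j = len(prefix)
    tails = list(itertools.product((0, 1), repeat=n - j))
    vals = np.empty(M)
    for t in range(M):
        g = -np.log(-np.log(rng.uniform(size=(n, 2)))) - c
        vals[t] = max(theta(prefix + tail)
                      + sum(g[j + i, v] for i, v in enumerate(tail))
                      for tail in tails)
    return vals.mean()

def algorithm1(theta, n):
    while True:                              # restart whenever we reject
        prefix = ()
        rejected = False
        for j in range(n):
            denom = np.exp(expected_max(theta, n, prefix))
            p = np.array([np.exp(expected_max(theta, n, prefix + (v,)))
                          for v in (0, 1)]) / denom
            r = max(0.0, 1.0 - p.sum())      # discrepancy mass of Lemma 2
            u = rng.uniform()
            if u < r:
                rejected = True              # reject and restart with j = 1
                break
            prefix += (0,) if u - r < p[0] else (1,)
        if not rejected:
            return prefix                    # accepted configuration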
When we reject the discrepancy, the probability we accept a configuration x is the product of probabilities in all rounds. Since these upper bounds are self-reducible, i.e., for every dimension i we are using the same quantities that were computed in the previous dimensions 1, ..., i − 1, we are sampling an accepted configuration proportionally to exp(θ(x)), the full Gibbs distribution.
Theorem 3. Let p(x) be the Gibbs distribution defined in Equation (1) and let {γ_i(x_i)} be a collection of i.i.d. random variables following the Gumbel distribution with zero mean. Then whenever Algorithm 1 accepts, it produces a configuration (x_1, ..., x_n) according to the Gibbs distribution:

    P [ Algorithm 1 outputs x | Algorithm 1 accepts ] = p(x).
Proof: The probability of sampling a configuration (x_1, ..., x_n) without rejecting is

    Π_{j=1}^{n} exp( E_γ [ max_{x_{j+1},...,x_n} { θ(x) + Σ_{i=j+1}^{n} γ_i(x_i) } ] ) / exp( E_γ [ max_{x_j,...,x_n} { θ(x) + Σ_{i=j}^{n} γ_i(x_i) } ] ) = exp(θ(x)) / exp( E_γ [ max_{x_1,...,x_n} { θ(x) + Σ_{i=1}^{n} γ_i(x_i) } ] ).

The probability of sampling without rejecting is thus the sum of this probability over all configurations, i.e., P[Algorithm 1 accepts] = Z / exp( E_γ [ max_{x_1,...,x_n} { θ(x) + Σ_{i=1}^{n} γ_i(x_i) } ] ). Therefore, conditioned on accepting a configuration, it is produced according to the Gibbs distribution.
Acceptance/rejection follows the geometric distribution, therefore the sampling procedure rejects k times with probability (1 − P[Algorithm 1 accepts])^k. The running time of our Gibbs sampler is determined by the average number of rejections 1/P[Algorithm 1 accepts]. Interestingly, this average is the quality of the partition upper bound presented in [5]. To augment this result we investigate in the next section efficiently computable lower bounds to the partition function that are based on random MAP perturbations. These lower bounds provide a way to efficiently determine the computational complexity of sampling from the Gibbs distribution for a given potential function.
5 Lower bounds on the partition function
The realization of the partition function as expectation-optimization pair in Theorem 1 provides
efficiently computable lower bounds on the partition function. Intuitively, these bounds correspond
to moving expectations (or summations) inside the maximization operations. In the following we
present two lower bounds that are derived along these lines, the first holds in expectation and the
second holds in probability.
Corollary 1. Consider a family of subsets α ∈ A and let x_α be a set of variables {x_i}_{i∈α} restricted to the indexes in α. Assume that the random variables γ_α(x_α) are i.i.d. according to the Gumbel distribution with zero mean, for every α, x_α. Then

    ∀α ∈ A:   log Z ≥ E_γ [ max_x { θ(x) + γ_α(x_α) } ].

In particular, log Z ≥ E_γ [ max_x { θ(x) + (1/|A|) Σ_{α∈A} γ_α(x_α) } ].
Proof: Let ᾱ = {1, ..., n} \ α; then Z = Σ_{x_α} Σ_{x_ᾱ} exp(θ(x)) ≥ Σ_{x_α} max_{x_ᾱ} exp(θ(x)). The first result is derived by swapping the maximization with the exponent, and applying Theorem 1. The second result is attained by averaging these lower bounds, log Z ≥ Σ_{α∈A} (1/|A|) E_γ [ max_x { θ(x) + γ_α(x_α) } ], and by moving the summation inside the maximization operation.
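A brute-force Python sketch of the first bound in Corollary 1 with a single subset α = {0} (a toy choice; the model here is a random table small enough to enumerate):

import itertools
import numpy as np

rng = np.random.default_rng(1)
c = 0.5772156649015329
n = 8
states = list(itertools.product((0, 1), repeat=n))
theta_tab = {x: v for x, v in zip(states, rng.uniform(-1, 1, size=2 ** n))}

M, draws = 500, []
for _ in range(M):
    g0 = -np.log(-np.log(rng.uniform(size=2))) - c   # gamma_alpha(x_0)
    draws.append(max(theta_tab[x] + g0[x[0]] for x in states))
lower = np.mean(draws)                               # expected lower bound
logZ = np.log(sum(np.exp(v) for v in theta_tab.values()))
print(lower, logZ)                                   # lower is below logZ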
The expected lower bound requires to invoke a MAP solver multiple times. Although this expectation may be estimated with a single MAP execution, the variance of this random MAP prediction is around √n. We suggest to recursively use Lemma 1 to lower bound the partition function with a single MAP operation in probability.
Corollary 2. Let θ(x) be a potential function over x = (x_1, ..., x_n). We create multiple copies of x_i, namely x_{i,k_i} for k_i = 1, ..., m_i, and define the extended potential function θ̃(x̃) = Σ_{k_1,...,k_n} θ(x_{1,k_1}, ..., x_{n,k_n}) / Π_i m_i. We define the extended perturbation model γ̃_i(x̃_i) = Σ_{k_i=1}^{m_i} γ_{i,k_i}(x_{i,k_i}) / m_i, where each perturbation is independent and distributed according to the Gumbel distribution with zero mean. Then, with probability at least 1 − Σ_{i=1}^{n} π² |dom(θ)| / (6 m_i ε²) holds

    log Z ≥ max_{x̃} { θ̃(x̃) + Σ_{i=1}^{n} γ̃_i(x̃_i) } − n ε.
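The construction is easy to mimic on a toy model; the sketch below replicates each of two binary variables into m copies and takes one maximization over the expanded space (a brute-force stand-in for the single MAP call; the sizes are illustrative choices).

import itertools
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(-1, 1, size=(2, 2))      # theta(x1, x2) table
m = 5
c = 0.5772156649015329
gam = -np.log(-np.log(rng.uniform(size=(2, m, 2)))) - c   # gamma_{i,k}(x)

best = -np.inf
for xhat in itertools.product((0, 1), repeat=2 * m):
    c1, c2 = xhat[:m], xhat[m:]
    val = np.mean([theta[a, b] for a in c1 for b in c2])   # theta_tilde
    val += sum(gam[0, k, c1[k]] for k in range(m)) / m     # gamma_tilde_1
    val += sum(gam[1, k, c2[k]] for k in range(m)) / m     # gamma_tilde_2
    best = max(best, val)
print(best, np.log(np.exp(theta).sum()))     # best is a probable lower bound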
Figure 1: Left: comparing our expected lower and probable lower bounds with structured mean-field and belief propagation on attractive models with high signal and varying coupling strength. Middle: estimating our unbiased sampling procedure complexity on spin glass models of varying sizes. Right: comparing our approximate sampling procedure on attractive models with high signal.
Proof: We estimate the expectation-optimization value of the log-partition function iteratively for every dimension, while replacing each expectation with its sampled average, as described in Lemma 1. Our result holds for every potential function, thus the statistics in each recursion hold uniformly for every x with probability at least 1 − π² |dom(θ)| / (6 m_i ε²). We then move the averages inside the maximization operation, thus lower bounding the nε-approximation of the partition function.
The probable lower bound that we provide does not assume graph separations, thus the statistical guarantees are worse than the ones presented in the approximation scheme of Theorem 2. Also, since we are seeking a lower bound, we are able to relax our optimization requirements and thus to use vertex-based random perturbations γ_i(x_i). This is an important difference that makes this lower bound widely applicable and very efficient.
6 Experiments
We evaluated our approach on spin glass models θ(x) = Σ_{i∈V} θ_i x_i + Σ_{(i,j)∈E} θ_{i,j} x_i x_j, where x_i ∈ {−1, 1}. Each spin has a local field parameter θ_i, sampled uniformly from [−1, 1]. The spins interact in a grid shaped graphical model with couplings θ_{i,j}, sampled uniformly from [0, c]. Whenever the coupling parameters are positive the model is called attractive, as adjacent variables give higher values to positively correlated configurations. Attractive models are computationally appealing as their MAP predictions can be computed efficiently by the graph-cut algorithm [2].
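For reference, a minimal generator for these grid spin glasses (the function name and layout are illustrative; MAP in the attractive case would use graph-cuts, which is not reimplemented here):

import numpy as np

def spin_glass_grid(rows, cols, coupling_max, rng):
    """Local fields theta_i ~ Unif[-1, 1]; attractive couplings
    theta_ij ~ Unif[0, coupling_max] on a rows x cols grid."""
    theta_i = rng.uniform(-1.0, 1.0, size=rows * cols)
    edges = []
    for r in range(rows):
        for s in range(cols):
            v = r * cols + s
            if s + 1 < cols:
                edges.append((v, v + 1))       # horizontal edge
            if r + 1 < rows:
                edges.append((v, v + cols))    # vertical edge
    theta_ij = rng.uniform(0.0, coupling_max, size=len(edges))
    return theta_i, edges, theta_ij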
We begin by evaluating our lower bounds, presented in Section 5, on 10×10 spin glass models. Corollary 1 presents a lower bound that holds in expectation. We evaluated these lower bounds while perturbing the local potentials with γ_i(x_i). Corollary 2 presents a lower bound that holds in probability and requires only a single MAP prediction on an expanded model. We evaluate the probable bound by expanding the model to 1000×1000 grids, ignoring the discrepancy ε. For both the expected lower bound and the probable lower bound we used graph-cuts to compute the random MAP perturbations. We compared these bounds to the different forms of structured mean-field, taking the one that performed best: standard structured mean-field that we computed over the vertical chains [8, 1], and the negative tree re-weighted bound computed on the horizontal and vertical trees [14].
We also compared to the sum-product belief propagation algorithm, which was recently proven to
produce lower bounds for attractive models [20, 18]. We computed the error in estimating the logarithm of the partition function, averaged over 10 spin glass models, see Figure 1. One can see that
the probable bound is the tightest when considering the medium and high coupling domain, which
is traditionally hard for all methods. As it holds in probability it might generate a solution which is
not a lower bound. One can also verify that on average this does not happen. The expected lower
bound is significantly worse for the low coupling regime, in which many configurations need to be
taken into account. It is (surprisingly) effective for the high coupling regime, which is characterized
by a few dominant configurations.
Section 4 describes an algorithm that generates unbiased samples from the full Gibbs distribution.
Focusing on spin glass models with strong local field potentials, it is well known that one cannot
produce unbiased samples from the Gibbs distributions in polynomial time [3]. Theorem 3 connects
Figure 2: Example image with the boundary annotation (left) and the error estimates obtained using our method (right). Thin structures of the object are often lost in a single MAP solution (middle-left), which are recovered by averaging the samples (middle-right), leading to better error estimates. Panels: image + annotation; MAP solution; average of 20 samples; error estimates.
the computational complexity of our unbiased sampling procedure to the gap between the logarithm
of the partition function and its upper bound in [5]. We use our probable lower bound to estimate this
gap on large grids, for which we cannot compute the partition function exactly. Figure 1 suggests
that the running time for this sampling procedure is sub-exponential.
Sampling from the Gibbs distribution in spin glass models with non-zero local field potentials is
computationally hard [7, 3]. The approximate sampling technique in Theorem 3 suggests a method
to overcome this difficulty by efficiently sampling from a distribution that approximates the Gibbs
distribution on its marginal probabilities. Although our theory is only stated for graphs without
cycles, it can be readily applied to general graphs, in the same way the (loopy) belief propagation algorithm is applied. For computational reasons we did not expand the graph. Also, we experiment both with pairwise perturbations, as Theorem 2 suggests, and with local perturbations,
which are guaranteed to preserve the potential function super-modularity. We computed the local
marginal probability errors of our sampling procedure, while comparing to the standard methods
of Gibbs sampling, Metropolis and Swendsen-Wang1 . In our experiments we let them run for at
most 1e8 iterations, see Figure 1. Both Gibbs sampling and the Metropolis algorithm perform similarly (we omit the Gibbs sampler performance for clarity). Although these algorithms as well as
the Swendsen-Wang algorithm directly sample from the Gibbs distribution, they typically require
exponential running time to succeed on spin glass models. Figure 1 shows that these samplers are
worse than our approximate samplers. Although we omit from the plots for clarity, our approximate
sampling marginal probabilities compare to those of the sum-product belief propagation and the tree
re-weighted belief propagation [22]. Nevertheless, our sampling scheme also provides a probability
notion, which lacks in the belief propagation type algorithms. Surprisingly, the approximate sampler
that uses pairwise perturbations performs (slightly) worse than the approximate sampler that only
use local perturbations. Although this is not explained by our current theory, it is an encouraging
observation, since approximate sampler that uses random MAP predictions with local perturbations
is orders of magnitude faster.
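A brute-force Python sketch of this fast approximate sampler follows (local perturbations only; on the real grids the inner maximization is a graph-cuts MAP call rather than enumeration):

import itertools
import numpy as np

c = 0.5772156649015329

def perturb_and_map_marginals(theta, n, M, rng):
    """Estimate marginals by repeating: draw unary Gumbel noise
    gamma_i(x_i), take the MAP of the perturbed model, and count."""
    states = list(itertools.product((0, 1), repeat=n))
    counts = np.zeros((n, 2))
    for _ in range(M):
        g = -np.log(-np.log(rng.uniform(size=(n, 2)))) - c
        x = max(states,
                key=lambda s: theta(s) + sum(g[i, v] for i, v in enumerate(s)))
        for i, v in enumerate(x):
            counts[i, v] += 1.0
    return counts / M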
Lastly, we emphasize the importance of probabilistic reasoning over the current variational methods,
such as tree re-weighted belief propagation [22] or max-marginal probabilities [10], that only generate probabilities over small subsets of variables. The task we consider is to obtain pixel accurate
boundaries from rough boundaries provided by the user. For example in an image editing application
the user may provide an input in the form of a rough polygon and the goal is to refine the boundaries
using the information from the gradients in the image. A natural notion of error is the average deviation of the marked boundary from the true boundary of the image. Given a user boundary we set
up a graphical model on the pixels using foreground/background models trained from regions well
inside/outside the marked boundary. Exact binary labeling can be obtained using the graph-cuts algorithm. From this we can compute the expected error by sampling multiple solutions using random
MAP predictors and averaging. On a dataset of 10 images which we carefully annotated to obtain
pixel accurate boundaries, we find that random MAP perturbations produce significantly more accurate estimates of boundary error compared to a single MAP solution. On average the error estimates obtained using random MAP perturbations are off by 1.04 pixels from the true error (obtained from ground truth), whereas the MAP solution is off by 3.51 pixels. Such a measure can be used in an active
annotation framework where the users can iteratively fix parts of the boundary that contain errors.
1. We used Talya Meltzer's inference package.
Figure 2 shows an example annotation, the MAP solution, the mean of 20 random MAP solutions,
and boundary error estimates.
7 Related work
The Gibbs distribution plays a key role in many areas of science, including computer science, statistics and physics. To learn more about its roles in machine learning, as well as its standard samplers,
we refer the interested reader to the textbook [11]. Our work is based on max-statistics of collections
of random variables. For comprehensive introduction to extreme value statistics we refer the reader
to [12].
The Gibbs distribution and its partition function can be realized from the statistics of random
MAP perturbations with the Gumbel distribution (see Theorem 1), [12, 17, 21, 5]. Recently,
[16, 9, 17, 21, 6] explore the different aspects of random MAP predictions with low dimensional
perturbation. [16] describe sampling from the Gaussian distribution with random Gaussian perturbations. [17] show that random MAP predictors with low dimensional perturbations share similar
statistics as the Gibbs distribution. [21] describe the Bayesian perspectives of these models and their
efficient sampling procedures. [9, 6] consider the generalization properties of such models within
PAC-Bayesian theory. In our work we formally relate random MAP perturbations and the Gibbs
distribution. Specifically, we describe the case for which the marginal probabilities of random MAP
perturbations, with the proper expansion, approximate those of the Gibbs distribution. We also
show how to use the statistics of random MAP perturbations to generate unbiased samples from
the Gibbs distribution. These probability models generate samples efficiently thorough optimization: they have statistical advantages over purely variational approaches such as tree re-weighted
belief propagation [22] or max-marginals [10], and they are faster than standard Gibbs samplers and
Markov chain Monte Carlo approaches when MAP prediction is efficient [3, 25]. Other methods
that efficiently produce samples include Herding [23] and determinantal processes [13].
Our suggested samplers for the Gibbs distribution are based on low dimensional representation of
the partition function, [5]. We augment their results in a few ways. In Lemma 2 we refine their
upper bound, to a series of sequentially tighter bounds. Corollary 2 shows that the approximation
scheme of [5] is in fact a lower bound that holds in probability. Lower bounds for the partition function have been extensively developed in the recent years within the context of variational methods.
Structured mean-field methods are inner-bound methods where a simpler distribution is optimized
as an approximation to the posterior in a KL-divergence sense [8, 1, 14]. The difficulty comes
from non-convexity of the set of feasible distributions. Surprisingly, [20, 18] have shown that the
sum-product belief propagation provides a lower bound to the partition function for super-modular
potential functions. This result is based on the four function theorem which considers nonnegative
functions over distributive lattices.
8 Discussion
This work explores new approaches to sample from the Gibbs distribution. Sampling from the Gibbs
distribution is a key problem in machine learning. Traditional approaches, such as Gibbs sampling, fail in the "high-signal high-coupling" regime that results in ragged energy landscapes. Following
[17, 21], we showed here that one can take advantage of efficient MAP solvers to generate approximate or unbiased samples from the Gibbs distribution, when we randomly perturb the potential
function. Since MAP predictions are not affected by ragged energy landscapes, our approach excels
in the ?high-signal high-coupling? regime. As a by-product to our approach we constructed lower
bounds to the partition functions, which are both tighter and faster than the previous approaches in
the ?high-signal high-coupling? regime.
Our approach is based on random MAP perturbations that estimate the partition functions with
expectation. In practice we compute the empirical mean. [15] show that the deviation of the sampled
mean from its expectation decays exponentially.
The computational complexity of our approximate sampling procedure is determined by the perturbation dimension. Currently, our theory does not describe the success of the probability model that is based on the maximal argument of a perturbed MAP program with local perturbations.
References
[1] Alexandre Bouchard-Côté and Michael I. Jordan. Optimization of structured mean field objectives. In AUAI, pages 67-74, 2009.
[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[3] L. A. Goldberg and M. Jerrum. The complexity of ferromagnetic Ising with local fields. Combinatorics, Probability and Computing, 16(1):43, 2007.
[4] E. J. Gumbel and J. Lieblein. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Govt. Print. Office, 1954.
[5] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[6] T. Hazan, S. Maji, J. Keshet, and T. Jaakkola. Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions. Advances in Neural Information Processing Systems, 2013.
[7] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087-1116, 1993.
[8] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[9] J. Keshet, D. McAllester, and T. Hazan. PAC-Bayesian approach for minimization of phoneme error rate. In ICASSP, 2011.
[10] Pushmeet Kohli and Philip H. S. Torr. Measuring uncertainty in graph cut solutions: efficiently computing min-marginal energies using dynamic graph cuts. In ECCV, pages 30-43, 2006.
[11] D. Koller and N. Friedman. Probabilistic Graphical Models. MIT Press, 2009.
[12] S. Kotz and S. Nadarajah. Extreme Value Distributions: Theory and Applications. World Scientific Publishing Company, 2000.
[13] A. Kulesza and B. Taskar. Structured determinantal point processes. In Proc. Neural Information Processing Systems, 2010.
[14] Qiang Liu and Alexander T. Ihler. Negative tree reweighted belief propagation. arXiv preprint arXiv:1203.3494, 2012.
[15] Francesco Orabona, Tamir Hazan, Anand D. Sarwate, and Tommi Jaakkola. On measure concentration of random maximum a-posteriori perturbations. arXiv:1310.4227, 2013.
[16] G. Papandreou and A. Yuille. Gaussian sampling by local perturbations. In Proc. Int. Conf. on Neural Information Processing Systems (NIPS), pages 1858-1866, December 2010.
[17] G. Papandreou and A. Yuille. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, Barcelona, Spain, November 2011.
[18] Nicholas Ruozzi. The Bethe partition function of log-supermodular graphical models. arXiv preprint arXiv:1202.6035, 2012.
[19] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using message passing. In Conf. Uncertainty in Artificial Intelligence (UAI), 2008.
[20] E. B. Sudderth, M. J. Wainwright, and A. S. Willsky. Loop series and Bethe variational bounds in attractive graphical models. Advances in Neural Information Processing Systems, 20, 2008.
[21] D. Tarlow, R. P. Adams, and R. S. Zemel. Randomized optimum models for structured prediction. In Proceedings of the 15th Conference on Artificial Intelligence and Statistics, 2012.
[22] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. Trans. on Information Theory, 51(7):2313-2335, 2005.
[23] Max Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1121-1128. ACM, 2009.
[24] T. Werner. High-arity interactions, polyhedral relaxations, and cutting plane algorithm for soft constraint optimisation (MAP-MRF). In CVPR, pages 1-8, 2008.
[25] J. Zhang, H. Liang, and F. Bai. Approximating partition functions of the two-state spin system. Information Processing Letters, 111(14):702-710, 2011.
EDML for Learning Parameters in
Directed and Undirected Graphical Models
Khaled S. Refaat, Arthur Choi, Adnan Darwiche
Computer Science Department
University of California, Los Angeles
{krefaat,aychoi,darwiche}@cs.ucla.edu
Abstract
EDML is a recently proposed algorithm for learning parameters in Bayesian networks. It was originally derived in terms of approximate inference on a meta-network, which underlies the Bayesian approach to parameter estimation. While
this initial derivation helped discover EDML in the first place and provided a concrete context for identifying some of its properties (e.g., in contrast to EM), the
formal setting was somewhat tedious in the number of concepts it drew on. In this
paper, we propose a greatly simplified perspective on EDML, which casts it as
a general approach to continuous optimization. The new perspective has several
advantages. First, it makes immediate some results that were non-trivial to prove
initially. Second, it facilitates the design of EDML algorithms for new graphical
models, leading to a new algorithm for learning parameters in Markov networks.
We derive this algorithm in this paper, and show, empirically, that it can sometimes
learn estimates more efficiently from complete data, compared to commonly used
optimization methods, such as conjugate gradient and L-BFGS.
1 Introduction
EDML is a recently proposed algorithm for learning MAP parameters of a Bayesian network from
incomplete data [5, 16]. While it is procedurally very similar to Expectation Maximization (EM) [7,
11], EDML was shown to have certain advantages, both theoretically and practically. Theoretically,
EDML can in certain specialized cases provably converge in one iteration, whereas EM may require
many iterations to solve the same learning problem. Some empirical evaluations further suggested
that EDML and hybrid EDML/EM algorithms can sometimes find better parameter estimates than
vanilla EM, in fewer iterations and less time. EDML was originally derived in terms of approximate
inference on a meta-network used for Bayesian approaches to parameter estimation. This graphical
representation of the estimation problem lent itself to the initial derivation of EDML, as well to the
identification of certain key theoretical properties, such as the one we just described. The formal
details, however, can be somewhat tedious as EDML draws on a number of different concepts. We
review EDML in such terms in the supplementary appendix.
In this paper, we propose a new perspective on EDML, which views it more abstractly in terms of
a simple method for continuous optimization. This new perspective has a number of advantages.
First, it makes immediate some results that were previously obtained for EDML, but through some
effort. Second, it facilitates the design of new EDML algorithms for new classes of models, where
graphical formulations of parameter estimation, such as meta-networks, are lacking. Here, we derive, in particular, a new parameter estimation algorithm for Markov networks, which is in many
ways a more challenging task, compared to the case of Bayesian networks. Empirically, we find that
EDML is capable of learning parameter estimates, under complete data, more efficiently than popular methods such as conjugate-gradient and L-BFGS, and in some cases, by an order-of-magnitude.
This paper is structured as follows. In Section 2, we highlight a simple iterative method for approximately solving continuous optimization problems. In Section 3, we formulate the EDML algorithm
for parameter estimation in Bayesian networks, as an instance of this optimization method. In Section 4, we derive a new EDML algorithm for Markov networks, based on the same perspective. In
Section 5, we contrast the two EDML algorithms for directed and undirected graphical models, in
the complete data case. We empirically evaluate our new algorithm for parameter estimation under
complete data in Markov networks, in Section 6; review related work in Section 7; and conclude in
Section 8. Proofs of theorems appear in the supplementary appendix.
2 An Approximate Optimization of Real-Valued Functions
Consider a real-valued objective function f(x) whose input x is a vector of components:

x = (x_1, ..., x_i, ..., x_n),

where each component x_i is a vector in R^{k_i} for some k_i. Suppose further that we have a constraint on the domain of function f(x) with a corresponding function g that maps an arbitrary point x to a point g(x) satisfying the given constraint. We say in this case that g(x) is a feasibility function and refer to the points in its range as feasible points.

Our goal here is to find a feasible input vector x = (x_1, ..., x_i, ..., x_n) that optimizes the function f(x). Given the difficulty of this optimization problem in general, we will settle for finding stationary points x in the constrained domain of function f(x).

One approach for finding such stationary points is as follows. Let x* = (x*_1, ..., x*_i, ..., x*_n) be a feasible point in the domain of function f(x). For each component x_i, we define a sub-function

f_{x*}(x_i) = f(x*_1, ..., x*_{i-1}, x_i, x*_{i+1}, ..., x*_n).

That is, we use the n-ary function f(x) to generate n sub-functions f_{x*}(x_i). Each of these sub-functions is obtained by fixing all inputs x_j of f(x), for j ≠ i, to their values in x*, while keeping the input x_i free. We further assume that these sub-functions are subject to the same constraints that the function f(x) is subject to.

We can now characterize all feasible points x* that are stationary with respect to the function f(x), in terms of local conditions on sub-functions f_{x*}(x_i).

Claim 1 A feasible point x* = (x*_1, ..., x*_i, ..., x*_n) is stationary for function f(x) iff for all i, component x*_i is stationary for sub-function f_{x*}(x_i).

This is immediate from the definition of a stationary point. Assuming no constraints, at a stationary point x*, the gradient ∇f(x*) = 0, i.e., ∇_{x_i} f(x*) = ∇f_{x*}(x*_i) = 0 for all x_i, where ∇_{x_i} f(x*) denotes the sub-vector of the gradient ∇f(x*) with respect to component x_i.¹
With these observations, we can now search for feasible stationary points x* of the constrained function f(x) using an iterative method that searches instead for stationary points of the constrained sub-functions f_{x*}(x_i). The method works as follows:

1. Start with some feasible point x^t of function f(x) for t = 0
2. While some x^t_i is not a stationary point for constrained sub-function f_{x^t}(x_i)
   (a) Find a stationary point y^{t+1}_i for each constrained sub-function f_{x^t}(x_i)
   (b) x^{t+1} = g(y^{t+1})
   (c) Increment t

The real computational work of this iterative procedure is in Steps 2(a) and 2(b), although we shall see later that such steps can, in some cases, be performed efficiently. With an appropriate feasibility function g(y), one can guarantee that a fixed-point of this procedure yields a stationary point of the constrained function f(x), by Claim 1.² Further, any stationary point is trivially a fixed-point of this procedure (one can seed this procedure with such a point).
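In pseudocode-like Python, the scheme reads as follows (a minimal sketch; the callback names are ours and are assumed to be supplied by the particular instantiation, e.g., EDML):

```python
# A minimal sketch of the iterative scheme above. The callbacks are hypothetical:
# solve_subfunction(x, i) returns a stationary point y_i of the constrained
# sub-function f_x(x_i), and feasibility(y) plays the role of g(y).

def iterate_subfunctions(x0, solve_subfunction, feasibility, max_iters=100, tol=1e-8):
    x = [list(xi) for xi in x0]   # components x_1, ..., x_n, each a list of floats
    for _ in range(max_iters):
        # Step 2(a): optimize every sub-function with the other components fixed.
        y = [solve_subfunction(x, i) for i in range(len(x))]
        # Step 2(b): map the combined optima back to a feasible point.
        x_new = feasibility(y)
        delta = max(abs(a - b) for xi_new, xi in zip(x_new, x)
                               for a, b in zip(xi_new, xi))
        if delta < tol:           # fixed point: stationary by Claim 1
            return x_new
        x = x_new
    return x
```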
¹ Under constraints, we consider points that are stationary with respect to the corresponding Lagrangian.
² We discuss this point further in the supplementary appendix.
As we shall show in the next section, the EDML algorithm (which has been proposed for parameter estimation in Bayesian networks) is an instance of the above procedure with some notable observations: (1) the sub-functions f_{x^t}(x_i) are convex and have unique optima; (2) these sub-functions have an interesting semantics, as they correspond to posterior distributions that are induced by Naive Bayes networks with soft evidence asserted on them; (3) defining these sub-functions requires inference in a Bayesian network parameterized by the current feasible point x^t; (4) there are already several convergent, fixed-point iterative methods for finding the unique optimum of these sub-functions;
and (5) these convergent methods produce solutions that are always feasible and, hence, the feasibility function g(y) corresponds to the identity function g(y) = y in this case.
We next show this connection to EDML as proposed for parameter estimation in Bayesian networks.
We follow by deriving an EDML algorithm (another instance of the above procedure), but for parameter estimation in undirected graphical models. We will also study the impact of having complete
data on both versions of the EDML algorithm, and finally evaluate the new instance of EDML by
comparing it to conjugate gradient and L-BFGS when applied to complete datasets.
3 EDML for Bayesian Networks
From here on, we use upper case letters (X) to denote variables and lower case letters (x) to denote
their values. Variable sets are denoted by bold-face upper case letters (X) and their instantiations
by bold-face lower case letters (x). Generally, we will use X to denote a variable in a Bayesian
network and U to denote its parents. A network parameter will therefore have the general form
θ_{x|u}, representing the probability Pr(X = x | U = u).
Consider a (possibly incomplete) dataset D with examples d_1, ..., d_N, and a Bayesian network with parameters θ. Our goal is to find parameter estimates θ that minimize the negative log-likelihood:

f(θ) = −ℓℓ(θ|D) = −∑_{i=1}^{N} log Pr_θ(d_i).    (1)

Here, θ = (..., θ_{X|u}, ...) is a vector over the network parameters. Moreover, Pr_θ is the distribution induced by the Bayesian network structure under parameters θ. As such, Pr_θ(d_i) is the probability of observing example d_i in dataset D under parameters θ.

Each component of θ is a parameter set θ_{X|u}, which defines a parameter θ_{x|u} for each value x of variable X and instantiation u of its parents U. The feasibility constraint here is that each component θ_{X|u} satisfies the convex sum-to-one constraint: ∑_x θ_{x|u} = 1.
The above parameter estimation problem is clearly in the form of the constrained optimization problem that we phrased in the previous section and, hence, admits the same iterative procedure proposed in that section for finding stationary points. The relevant questions now are: What form do the sub-functions f_{θ*}(θ_{X|u}) take in this context? What are their semantics? What properties do they have? How do we find their stationary points? What is the feasibility function g(y) in this case? Finally, what is the connection to previous work on EDML? We address these questions next.
3.1 Form

We start by characterizing the sub-functions of the negative log-likelihood given in Equation 1.

Theorem 1 For each parameter set θ_{X|u}, the negative log-likelihood of Equation 1 has the sub-function:

f_{θ*}(θ_{X|u}) = −∑_{i=1}^{N} log ( C_u^i + ∑_x C^i_{x|u} · θ_{x|u} )    (2)

where C_u^i and C^i_{x|u} are constants that are independent of parameter set θ_{X|u}, given by

C_u^i = Pr_{θ*}(d_i) − Pr_{θ*}(u, d_i)   and   C^i_{x|u} = Pr_{θ*}(x, u, d_i) / θ*_{x|u}

To compute the constants C^i, we require inference on a Bayesian network with parameters θ*.³
³ Theorem 1 assumes tacitly that θ*_{x|u} ≠ 0. More generally, however, C^i_{x|u} = ∂Pr_{θ*}(d_i)/∂θ_{x|u}, which can also be computed using some standard inference algorithms [6, 14].
[Figure 1: Estimation given independent soft observations. The figure shows a naive Bayes structure with root θ_X, children X_1, ..., X_N, and soft evidence β_1, ..., β_N asserted on the children.]
3.2 Semantics

Equation 2 has an interesting semantics, as it corresponds to the negative log-likelihood of a root variable in a naive Bayes structure, on which soft, not necessarily hard, evidence is asserted [5].⁴ This model is illustrated in Figure 1, where our goal is to estimate a parameter set θ_X, given soft observations β = (β_1, ..., β_N) on variables X_1, ..., X_N, where each β_i has a strength specified by a weight on each value x_i of X_i. If we denote the distribution of this model by P, then (1) P(θ) denotes a prior over parameter sets,⁵ (2) P(x_i | θ_X = (..., θ_x, ...)) = θ_x, and (3) weights P(β_i | x_i) denote the strengths of soft evidence β_i on value x_i. The log likelihood of our soft observations β is:

log P(β | θ_X) = ∑_{i=1}^{N} log ∑_{x_i} P(β_i | x_i) P(x_i | θ_X) = ∑_{i=1}^{N} log ∑_{x_i} P(β_i | x_i) · θ_{x_i}    (3)

The following result connects Equation 2 to the above likelihood of a soft dataset, when we now want to estimate the parameter set θ_{X|u}, for a particular variable X and parent instantiation u.

Theorem 2 Consider Equations 2 and 3, and assume that each soft evidence β_i has the strength P(β_i | x_i) = C_u^i + C^i_{x|u}. It then follows that

f_{θ*}(θ_{X|u}) = −log P(β | θ_{X|u})    (4)

This theorem yields the following interesting semantics for EDML sub-functions. Consider a parameter set θ_{X|u} and example d_i in our dataset. The example can then be viewed as providing "votes" on what this parameter set should be. In particular, the vote of example d_i for value x takes the form of a soft evidence β_i whose strength is given by

P(β_i | x_i) = Pr_{θ*}(d_i) − Pr_{θ*}(u, d_i) + Pr_{θ*}(x, u, d_i) / θ*_{x|u}

The sub-function is then aggregating these votes from different examples and producing a corresponding objective function on parameter set θ_{X|u}. EDML optimizes this objective function to produce the next estimate for each parameter set θ_{X|u}.
3.3 Properties

Equation 2 is a convex function, and thus has a unique optimum.⁶ In particular, we have logs of a linear function, which are each concave. The sum of two concave functions is also concave, and negating that sum (as Equation 2 does) yields a convex function; thus our sub-function f_{θ*}(θ_{X|u}) is convex, and is subject to a convex sum-to-one constraint [16]. Convex functions are relatively well understood, and there are a variety of methods and systems that can be used to optimize Equation 2; see, e.g., [3]. We describe one such approach, next.
3.4 Finding the Unique Optima

In every EDML iteration, and for each parameter set θ_{X|u}, we seek the unique optimum for each sub-function f_{θ*}(θ_{X|u}), given by Equation 2. Refaat et al. have previously proposed a fixed-point
⁴ Soft evidence is an observation that increases or decreases one's belief in an event, but not necessarily to the point of certainty. For more on soft evidence, see [4].
⁵ Typically, we assume Dirichlet priors for MAP estimation. However, we focus on ML estimation here.
⁶ More specifically, strict convexity implies a unique optimum, although under certain assumptions, we can guarantee that Equation 2 is indeed strictly convex.
algorithm that monotonically improves the objective, and is guaranteed to converge [16]. Moreover, the solutions it produces already satisfy the convex sum-to-one constraint and, hence, the feasibility function g ends up being the identity function g(θ) = θ.

In particular, we start with some initial feasible estimates θ^t_{X|u} at iteration t = 0, and then apply the following update equation until convergence:

θ^{t+1}_{x|u} = (1/N) ∑_{i=1}^{N} [ (C_u^i + C^i_{x|u}) · θ^t_{x|u} ] / [ C_u^i + ∑_{x′} C^i_{x′|u} · θ^t_{x′|u} ]    (5)

Note here that the constants C^i are computed by inference on a Bayesian network structure under parameters θ^t (see Theorem 1 for the definitions of these constants). Moreover, while the above procedure is convergent when optimizing sub-functions f_{θ*}(θ_{X|u}), the global EDML algorithm that is optimizing function f(θ) may not be convergent in general.
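As an illustration, a minimal numpy sketch of this update for a single parameter set θ_{X|u} follows; the constants are assumed to have been computed by inference under the current parameters, per Theorem 1.

```python
import numpy as np

def edml_update(theta, c_u, c_xu):
    """One application of Equation 5 for a single parameter set theta_{X|u}.

    theta : shape (k,)   current estimates theta^t_{x|u}, summing to one
    c_u   : shape (N,)   constants C_u^i, one per example
    c_xu  : shape (N, k) constants C^i_{x|u}
    """
    numer = (c_u[:, None] + c_xu) * theta[None, :]   # (C_u^i + C^i_{x|u}) theta^t_{x|u}
    denom = c_u + c_xu @ theta                       # C_u^i + sum_x' C^i_{x'|u} theta^t_{x'|u}
    return (numer / denom[:, None]).mean(axis=0)     # average of per-example votes

# toy check: each example contributes a distribution, so the result sums to one
rng = np.random.default_rng(1)
theta = np.full(3, 1 / 3)
new_theta = edml_update(theta, rng.random(5), rng.random((5, 3)))
print(new_theta, new_theta.sum())
```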
3.5 Connection to Previous Work
EDML was originally derived by applying an approximate inference algorithm to a meta-network,
which is typically used in Bayesian approaches to parameter estimation [5, 16]. This previous
formulation of EDML, which is specific to Bayesian networks, now falls as a special instance of
the one given in Section 2. In particular, the "sub-problems" defined by the original EDML [5, 16] correspond precisely to the sub-functions f_{θ*}(θ_{X|u}) described here. Further, both versions of EDML are procedurally identical when they both use the same method for optimizing these sub-functions.
The new formulation of EDML is more transparent, however, at least in revealing certain properties of the algorithm. For example, it now follows immediately (from Section 2) that the fixed points of EDML are stationary points of the log-likelihood, a fact that was not proven until [16], using a
technique that appealed to the relationship between EDML and EM. Moreover, the proof that EDML
under complete data will converge immediately to the optimal estimates is also now immediate (see
Section 5). More importantly though, this new formulation provides a systematic procedure for
deriving new instances of EDML for additional models, beyond Bayesian networks. Indeed, in the
next section, we use this procedure to derive an EDML instance for Markov networks, which is
followed by an empirical evaluation of the new algorithm under complete data.
4 EDML for Undirected Models
In this section, we show how parameter estimation for undirected graphical models, such as Markov
networks, can also be posed as an optimization problem, as described in Section 2.
For Markov networks, θ = (..., θ_{Xa}, ...) is a vector over the network parameters. Component θ_{Xa} is a parameter set for a (tabular) factor a, assigning a number θ_{xa} ≥ 0 for each instantiation xa of variables Xa. The negative log-likelihood −ℓℓ(θ|D) for a Markov network is:

−ℓℓ(θ|D) = N log Z_θ − ∑_{i=1}^{N} log Z_θ(d_i)    (6)

where Z_θ is the partition function, and where Z_θ(d_i) is the partition function after conditioning on example d_i, under parameterization θ. Sub-functions with respect to Equation 6 may not be convex, as they were in the case of Bayesian networks. Consider instead the following objective function, which we shall subsequently relate to the negative log-likelihood:

f(θ) = −∑_{i=1}^{N} log Z_θ(d_i),    (7)

with a feasibility constraint that the partition function Z_θ equals some constant α. The following result tells us that it suffices to optimize Equation 7 under the given constraint, to optimize Equation 6.
Theorem 3 Let α be a positive constant, and let g(θ) be a (feasibility) function satisfying Z_{g(θ)} = α and g(θ_{xa}) ∝ θ_{xa} for all θ_{xa}.⁷ For every point θ, if g(θ) is optimal for Equation 7, subject to its constraint, then it is also optimal for Equation 6. Moreover, a point θ is stationary for Equation 6 iff the point g(θ) is stationary for Equation 7, subject to its constraint.

⁷ Here, g(θ_{xa}) denotes the component of g(θ) corresponding to θ_{xa}. Moreover, the function g(θ) can be constructed, e.g., by simply multiplying all entries of one parameter set by α/Z_θ. In our experiments, we normalize each parameter set to sum-to-one, but then update the constant α = Z_{θ^t} for the subsequent iteration.
network, we can now cast it in the terms of Section 2. We start by characterizing its sub-functions.
Theorem 4 For a given parameter set ?Xa , the objective function of Equation 7 has sub-functions:
f?? (?Xa ) = ?
N
X
i=1
log
X
Cxi a ? ?xa
subject to
xa
X
Cxa ? ?xa = ?
(8)
xa
where Cxi a and Cxa are constants that are independent of the parameter set ?Xa :
Cxi a = Z?? (xa , di )/?x? a
and
Cxa = Z?? (xa )/?x? a .
Note that computing these constants requires inference on a Markov network with parameters ?? .8
Interestingly, this sub-function is convex, as well as the constraint (which is now linear), resulting in
a unique optimum, as in Bayesian networks. However, even when ?? is a feasible point, the unique
optima of these sub-functions may not be feasible when combined. Thus, the feasibility function
g(?) of Theorem 3 must be utilized in this case.
We now have another instance of the iterative algorithm proposed in Section 2, but for undirected
graphical models. That is, we have just derived an EDML algorithm for such models.
5 EDML under Complete Data
We consider now how EDML simplifies under complete data for both Bayesian and Markov networks, identifying forms and properties of the corresponding sub-functions under complete data.
We start with Bayesian networks. Consider a variable X and a parent instantiation u, and let D#(xu) represent the number of examples that contain xu in the complete dataset D. Equation 2 of Theorem 1 then reduces to: f_{θ*}(θ_{X|u}) = −∑_x D#(xu) log θ_{x|u} + C, where C is a constant that is independent of parameter set θ_{X|u}. Assuming that θ* is feasible (i.e., each θ_{X|u} satisfies the sum-to-one constraint), the unique optimum of this sub-function is θ_{x|u} = D#(xu) / D#(u), which is guaranteed to yield a feasible point θ, globally. Hence, EDML produces the unique optimal estimates in its first iteration and terminates immediately thereafter.
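In code, this one-step optimum is just a normalized count; a minimal sketch with toy numbers:

```python
import numpy as np

# Under complete data, EDML's one-step optimum for a parameter set theta_{X|u}
# is the normalized count theta_{x|u} = D#(xu) / D#(u).
counts_xu = np.array([22.0, 15.0, 2.0, 61.0])   # toy values of D#(xu) for a fixed u
theta = counts_xu / counts_xu.sum()             # D#(u) = sum_x D#(xu)
print(theta)
```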
The situation is different, however, for Markov networks. Under a complete dataset D, Equation 8 of Theorem 4 reduces to: f_{θ*}(θ_{Xa}) = −∑_{xa} D#(xa) log θ_{xa} + C, where C is a constant that is independent of parameter set θ_{Xa}. Assuming that θ* is feasible (i.e., satisfies Z_{θ*} = α), the unique optimum of this sub-function has the closed form: θ_{xa} = α · D#(xa) / (N · C_{xa}), which is equivalent to the unique optimum one would obtain in a sub-function for Equation 6 [15, 13]. Contrary to Bayesian networks, the collection of these optima for different parameter sets does not necessarily yield a feasible point θ. Hence, the feasibility function g of Theorem 3 must be applied here.
The resulting feasible point, however, may no longer be a stationary point for the corresponding
sub-functions, leading EDML to iterate further. Hence, under complete data, EDML for Bayesian
networks converges immediately, while EDML for Markov networks may take multiple iterations.
Both results are consistent with what is already known in the literature on parameter estimation
for Bayesian and Markov networks. The result on Bayesian networks is useful in confirming that
EDML performs optimally in this case. The result for Markov networks, however, gives rise to a
new algorithm for parameter estimation under complete data. We evaluate the performance of this
new EDML algorithm after considering the following example.
Let D be a complete dataset over three variables A, B and C, specified in terms of the number
of times that each instantiation a, b, c appears in D. In particular, we have the following counts:
⁸ Theorem 4 assumes that θ*_{xa} ≠ 0. In general, C^i_{xa} = ∂Z_{θ*}(d_i)/∂θ_{xa}, and C_{xa} = ∂Z_{θ*}/∂θ_{xa}. See also Footnote 3.
Table 1: Speed-up results of EDML over CG and L-BFGS

problem                   | #vars  | i_cg   | i_edml | t_cg  | S      | i_lbfgs | i'_edml | t_lbfgs | S'
zero                      | 256    | 45     | 105    | 3.62  | 3.90x  | 24      | 74      | 1.64    | 1.98x
one                       | 256    | 104    | 73     | 8.25  | 13.26x | 58      | 42      | 3.87    | 8.08x
two                       | 256    | 46     | 154    | 3.73  | 2.83x  | 21      | 87      | 1.54    | 1.54x
three                     | 256    | 43     | 169    | 3.58  | 2.52x  | 52      | 169     | 3.55    | 1.93x
four                      | 256    | 56     | 126    | 4.59  | 4.31x  | 61      | 115     | 3.90    | 3.22x
five                      | 256    | 43     | 155    | 3.48  | 2.70x  | 49      | 155     | 3.20    | 1.90x
six                       | 256    | 48     | 150    | 3.93  | 3.13x  | 20      | 90      | 1.47    | 1.40x
seven                     | 256    | 57     | 147    | 4.64  | 3.37x  | 23      | 89      | 1.65    | 1.62x
eight                     | 256    | 48     | 155    | 3.82  | 2.84x  | 57      | 154     | 3.83    | 2.28x
nine                      | 256    | 56     | 168    | 4.46  | 3.15x  | 45      | 141     | 2.90    | 1.94x
54.wcsp                   | 67     | 107.33 | 160.33 | 6.56  | 2.78x  | 68.33   | 172     | 1.80    | 0.72x
or-chain-42               | 385    | 120.33 | 27     | 0.12  | 31.27x | 110     | 54.33   | 0.06    | 6.43x
or-chain-45               | 715    | 151    | 33.67  | 0.14  | 12.52x | 94.33   | 36.33   | 0.06    | 4.85x
or-chain-147              | 410    | 107.67 | 18.67  | 3.27  | 80.72x | 105     | 58.33   | 1.63    | 12.77x
or-chain-148              | 463    | 122.67 | 42.33  | 1.00  | 49.04x | 80      | 32      | 0.28    | 14.24x
or-chain-225              | 467    | 181.33 | 58     | 0.79  | 44.14x | 137.67  | 69      | 0.33    | 10.76x
rbm20                     | 40     | 9      | 41     | 30.98 | 2.38x  | 30      | 107.22  | 30.18   | 0.99x
Seg2-17                   | 228    | 63     | 83.66  | 1.77  | 7.00x  | 46.67   | 64.67   | 0.74    | 4.14x
Seg7-11                   | 235    | 54.3   | 84     | 1.86  | 2.84x  | 48.66   | 73.33   | 1.27    | 2.32x
Family2Dominant.1.5loci   | 385    | 117.33 | 88     | 2.39  | 5.90x  | 85.67   | 78.33   | 1.04    | 2.69x
Family2Recessive.15.5loci | 385    | 111.6  | 89.7   | 1.31  | 3.85x  | 86.33   | 81.67   | 0.74    | 2.18x
grid10x10.f5.wrap         | 100    | 136.67 | 239    | 17.36 | 6.26x  | 142     | 180.33  | 10.30   | 4.63x
grid10x10.f10.wrap        | 100    | 101.33 | 62.33  | 12.39 | 20.92x | 92.67   | 59      | 5.94    | 9.70x
average                   | 275.65 | 83.89  | 101.29 | 5.39  | 13.55x | 66.84   | 94.89   | 3.56    | 4.45x
D#(a, b, c) = 4, D#(a, b, c̄) = 18, D#(a, b̄, c) = 2, D#(a, b̄, c̄) = 13, D#(ā, b, c) = 1, D#(ā, b, c̄) = 1, D#(ā, b̄, c) = 42, and D#(ā, b̄, c̄) = 19. Suppose we want to learn, from this dataset, a Markov network with 3 edges, (A, B), (B, C) and (A, C), with the corresponding parameter sets θ_AB, θ_BC and θ_AC. If the initial set of parameters θ* = (θ*_AB, θ*_BC, θ*_AC) is uniform, i.e., θ*_XY = (1, 1, 1, 1), then Equation 8 gives the sub-function

f_{θ*}(θ_AB) = −22 · log θ_ab − 15 · log θ_ab̄ − 2 · log θ_āb − 61 · log θ_āb̄.

Moreover, we have Z_{θ*} = 2 · θ_ab + 2 · θ_ab̄ + 2 · θ_āb + 2 · θ_āb̄. Minimizing f_{θ*}(θ_AB) under Z_{θ*} = α = 2 corresponds to solving a convex optimization problem, which has the unique solution: (θ_ab, θ_ab̄, θ_āb, θ_āb̄) = (22/100, 15/100, 2/100, 61/100). We solve similar convex optimization problems for the other parameter sets θ_BC and θ_AC, to update estimates θ*. We then apply an appropriate feasibility function g (see Footnote 7), and repeat until convergence.
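The following snippet (our own sanity check, mirroring the numbers above) carries out this EDML step for θ_AB and recovers the stated solution.

```python
import numpy as np

# Counts D#(a,b,c) in the order (a,b,c), (a,b,~c), (a,~b,c), (a,~b,~c),
# (~a,b,c), (~a,b,~c), (~a,~b,c), (~a,~b,~c), reshaped to [a][b][c].
counts = np.array([4, 18, 2, 13, 1, 1, 42, 19], dtype=float).reshape(2, 2, 2)
N = counts.sum()                       # 100 examples
theta_ab_counts = counts.sum(axis=2)   # D#(ab): [[22, 15], [2, 61]], marginalizing C

# With uniform initial parameters, Z_theta*(ab) sums the two completions of C,
# each with potential product 1, so C_ab = Z(ab)/theta*_ab = 2 for every ab.
alpha, C_ab = 2.0, 2.0
theta_ab = alpha * theta_ab_counts / (N * C_ab)
print(theta_ab)   # [[0.22 0.15], [0.02 0.61]], as in the text
```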
6 Experimental Results
We evaluate now the efficiency of EDML, conjugate gradient (CG) and Limited-memory BFGS (L-BFGS), when learning Markov networks under complete data.⁹ We first learned grid-structured pairwise MRFs from the CEDAR dataset of handwritten digits, which has 10 datasets (one for each digit) of 16×16 binary images. We also simulated datasets from networks used in the probabilistic inference evaluations of UAI-2008, 2010 and 2012, that are amenable to jointree inference.¹⁰ For each network, we simulated 3 datasets of size 2^10 examples each, and learned parameters using the original structure. Experiments were run on a 3.6GHz Intel i5 CPU with access to 8GB RAM.
We used the CG implementation in the Apache Commons Math library, and the L-BFGS implementation in Mallet.¹¹ Both are Java libraries, and our implementation of EDML is also in Java. More importantly, all of the CG, L-BFGS, and EDML methods rely on the same underlying engine for exact inference.¹² For EDML, we damped parameter estimates at each iteration, which is typical for algorithms like loopy belief propagation, which EDML was originally inspired by [5].¹³ We used Brent's method with default settings for line search in CG, which was the most efficient over all univariate solvers in Apache's library, which we evaluated in initial experiments.

⁹ We also considered Iterative Proportional Fitting (IPF) as a baseline. However, IPF does not scale to our benchmarks, as it invokes inference many times more often than the methods we considered.
¹⁰ Network 54.wcsp is a weighted CSP problem; or-chain-{42, 45, 147, 148, 225} are from the Promedas suite; rbm-20 is a restricted Boltzmann machine; Seg2-17 and Seg7-11 are from the Segmentation suite; family2-dominant.1.5loci and family2-recessive.15.5loci are genetic linkage analysis networks; and grid10x10.f5.wrap and grid10x10.f10.wrap are 10x10 grid networks.
¹¹ Available at http://commons.apache.org/ and http://mallet.cs.umass.edu/.

We first run CG until convergence (or after exceeding 30 minutes) to obtain parameter estimates of some quality q_cg (in log likelihood), recording the number of iterations i_cg and time t_cg required in minutes. EDML is then run until it obtains an estimate of the same quality q_cg, or better, recording also the number of iterations i_edml and time t_edml in minutes. The time speed-up S of EDML over CG is computed as t_cg/t_edml. We also performed the same comparison with L-BFGS instead of CG, recording the corresponding number of iterations (i_lbfgs, i′_edml) and time taken (t_lbfgs, t′_edml), giving us the speed-up of EDML over L-BFGS as S′ = t_lbfgs/t′_edml.

Table 1 shows results for both sets of experiments. It shows the number of variables in each network (#vars), the average number of iterations taken by each algorithm, and the average speed-up achieved by EDML over CG (L-BFGS).¹⁴ On the given benchmarks, we see that on average EDML was roughly 13.5× faster than CG, and 4.5× faster than L-BFGS. EDML was up to an order-of-magnitude faster than L-BFGS in some cases. In many cases, EDML required more iterations but was still faster in time. This is due in part to the number of times inference is invoked by CG and L-BFGS (in line search), whereas EDML only needs to invoke inference once per iteration.
7 Related Work
As an iterative fixed-point algorithm, we can view EDML as a Jacobi-type method, where updates
are performed in parallel [1]. Alternatively, a version of EDML using Gauss-Seidel iterations would
update each parameter set in sequence using the most recently computed update. This leads to an
algorithm that monotonically improves the log likelihood at each update. In this case, we obtain a
coordinate descent algorithm, Iterative Proportional Fitting (IPF) [9], as a special case of EDML.
The notion of fixing all parameters, except for one, has been exploited before for the purposes of
optimizing the log likelihood of a Markov network, as a heuristic for structure learning [15]. This
notion also underlies the IPF algorithm; see, e.g., [13], Section 19.5.7. In the case of complete data,
the resulting sub-function is convex, yet for incomplete data, it is not necessarily convex.
Optimization methods such as conjugate gradient and L-BFGS [12] are more commonly used to optimize the parameters of a Markov network. For relational Markov networks or Markov networks that otherwise assume a feature-based representation [8], evaluating the likelihood is typically intractable, in which case one optimizes instead the pseudo-log-likelihood [2]. For more on
parameter estimation in Markov networks, see [10, 13].
8 Conclusion
In this paper, we provided an abstract and simple view of the EDML algorithm, originally proposed
for parameter estimation in Bayesian networks, as a particular method for continuous optimization.
One consequence of this view is that it is immediate that fixed-points of EDML are stationary points
of the log-likelihood, and vice-versa [16]. A more interesting consequence, is that it allows us to
propose an EDML algorithm for a new class of models, Markov networks. Empirically, we find that
EDML can more efficiently learn parameter estimates for Markov networks under complete data,
compared to conjugate gradient and L-BFGS, sometimes by an order-of-magnitude. The empirical
evaluation of EDML for Markov networks under incomplete data is left for future work.
Acknowledgments
This work has been partially supported by ONR grant #N00014-12-1-0423.
¹² For exact inference in Markov networks, we employed a jointree algorithm from the SamIam inference library, http://reasoning.cs.ucla.edu/samiam/.
¹³ We start with an initial damping factor of 1/2, which we tighten as we iterate.
¹⁴ For CG, we used a threshold based on relative change in the likelihood at 10^-4. We used Mallet's default convergence threshold for L-BFGS.
References
[1] Dimitri P. Bertsekas and John N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, 1989.
[2] J. Besag. Statistical Analysis of Non-Lattice Data. The Statistician, 24:179–195, 1975.
[3] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] Hei Chan and Adnan Darwiche. On the revision of probabilistic beliefs using uncertain evidence. AIJ, 163:67–90, 2005.
[5] Arthur Choi, Khaled S. Refaat, and Adnan Darwiche. EDML: A method for learning parameters in Bayesian networks. In UAI, 2011.
[6] Adnan Darwiche. A differential approach to inference in Bayesian networks. JACM, 50(3):280–305, 2003.
[7] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38, 1977.
[8] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
[9] Radim Jirousek and Stanislav Preucil. On the effective implementation of the iterative proportional fitting procedure. Computational Statistics & Data Analysis, 19(2):177–189, 1995.
[10] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[11] S.L. Lauritzen. The EM algorithm for graphical association models with missing data. Computational Statistics and Data Analysis, 19:191–201, 1995.
[12] D.C. Liu and J. Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization. Mathematical Programming, 45(3):503–528, 1989.
[13] Kevin Patrick Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[14] James Park and Adnan Darwiche. A differential semantics for jointree algorithms. AIJ, 156:197–216, 2004.
[15] Stephen Della Pietra, Vincent J. Della Pietra, and John D. Lafferty. Inducing features of random fields. IEEE Trans. Pattern Anal. Mach. Intell., 19(4):380–393, 1997.
[16] Khaled S. Refaat, Arthur Choi, and Adnan Darwiche. New advances and theoretical insights into EDML. In UAI, pages 705–714, 2012.
Projecting Ising Model Parameters for Fast Mixing
Xianghang Liu
NICTA, The University of New South Wales
[email protected]
Justin Domke
NICTA, The Australian National University
[email protected]
Abstract
Inference in general Ising models is difficult, due to high treewidth making treebased algorithms intractable. Moreover, when interactions are strong, Gibbs sampling may take exponential time to converge to the stationary distribution. We
present an algorithm to project Ising model parameters onto a parameter set that
is guaranteed to be fast mixing, under several divergences. We find that Gibbs
sampling using the projected parameters is more accurate than with the original
parameters when interaction strengths are strong and when limited time is available for sampling.
1 Introduction
High-treewidth graphical models typically yield distributions where exact inference is intractable.
To cope with this, one often makes an approximation based on a tractable model. For example,
given some intractable distribution q, mean-field inference [14] attempts to minimize KL(p||q) over
p ∈ TRACT, where TRACT is the set of fully-factorized distributions. Similarly, structured mean-field minimizes the KL-divergence, but allows TRACT to be the set of distributions that obey some
tree [16] or a non-overlapping clustered [20] structure. In different ways, loopy belief propagation
[21] and tree-reweighted belief propagation [19] also make use of tree-based approximations, while
Globerson and Jaakkola [6] provide an approximate inference method based on exact inference in
planar graphs with zero field.
In this paper, we explore an alternative notion of a "tractable" model. These are "fast mixing" models, or distributions that, while they may be high-treewidth, have parameter-space conditions guaranteeing that Gibbs sampling will quickly converge to the stationary distribution. While the precise form of the parameter space conditions is slightly technical (Sections 2-3), informally, it is simply that interaction strengths between neighboring variables are not too strong.

In the context of the Ising model, we attempt to use these models in the most basic way possible: by taking an arbitrary (slow-mixing) set of parameters, projecting onto the fast-mixing set, using four different divergences. First, we show how to project in the Euclidean norm, by iteratively thresholding a singular value decomposition (Theorem 7). Secondly, we experiment with projecting using the "zero-avoiding" divergence KL(q||p). Since this requires taking (intractable) expectations with respect to q, it is of only theoretical interest. Third, we suggest a novel "piecewise" approximation of the KL divergence, where one drops edges from both q and p until a low-treewidth graph remains where the exact KL divergence can be calculated. Experimentally, this does not perform as well as the true KL-divergence, but is easy to evaluate. Fourth, we consider the "zero forcing" divergence KL(p||q). Since this requires expectations with respect to p, which is constrained to be fast-mixing,
it can be approximated by Gibbs sampling, and the divergence can be minimized through stochastic
approximation. This can be seen as a generalization of mean-field where the set of approximating
distributions is expanded from fully-factorized to fast-mixing.
1
2 Background
The literature on mixing times in Markov chains is extensive, including a recent textbook [10]. The
presentation in the rest of this section is based on that of Dyer et al. [4].
Given a distribution p(x), one will often wish to draw samples from it. While in certain cases
(e.g. the Normal distribution) one can obtain exact samples, for Markov random fields (MRFs), one
must generally resort to iterative Markov chain Monte Carlo (MCMC) methods that obtain a sample
asymptotically. In this paper, we consider the classic Gibbs sampling method [5], where one starts
with some configuration x, and repeatedly picks a node i, and samples x_i from p(x_i | x_{−i}). Under
mild conditions, this can be shown to sample from a distribution that converges to p as t → ∞.
It is common to use more sophisticated methods such as block Gibbs sampling, the Swendsen-Wang
algorithm [18], or tree sampling [7]. In principle, each algorithm could have unique parameter-space
conditions under which it is fast mixing. Here, we focus on the univariate case for simplicity and
because fast mixing of univariate Gibbs is sufficient for fast mixing of some other methods [13].
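For concreteness, here is a minimal univariate Gibbs sweep for the Ising model introduced in the next section (a sketch of ours; the coupling matrix and its values are illustrative):

```python
import numpy as np

def gibbs_sweep(x, B, alpha, rng):
    """One systematic-scan Gibbs sweep for an Ising model with energy
    sum_{i<j} B_ij x_i x_j + sum_i alpha_i x_i (B symmetric, zero diagonal)."""
    for i in range(x.size):
        field = alpha[i] + B[i] @ x                    # depends only on neighbors of i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))    # p(x_i = +1 | x_{-i})
        x[i] = 1 if rng.random() < p_plus else -1
    return x

rng = np.random.default_rng(0)
n = 16
B = np.diag(np.full(n - 1, 0.1), k=1)   # a weak toy chain: fast mixing by Section 3
B = B + B.T
x = rng.choice([-1, 1], size=n)
for _ in range(100):
    x = gibbs_sweep(x, B, np.zeros(n), rng)
print(x)
```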
Definition 1. Given two finite distributions p and q, the total variation distance ||·||_TV is

||p(X) − q(X)||_TV = (1/2) ∑_x |p(X = x) − q(X = x)|.
We need a property of a distribution that can guarantee fast mixing. The dependency R_ij of x_i on x_j is defined by considering two configurations x and x′, and measuring how much the conditional distribution of x_i can vary when x_k = x′_k for all k ≠ j.

Definition 2. Given a distribution p, the dependency matrix R is defined by

R_ij = max_{x, x′ : x_{−j} = x′_{−j}} ||p(X_i | x_{−i}) − p(X_i | x′_{−i})||_TV.
Given some threshold ε, the mixing time is the number of iterations needed to guarantee that the total variation distance of the Gibbs chain to the stationary distribution is less than ε.

Definition 3. Suppose that {X^t} denotes the sequence of random variables corresponding to running Gibbs sampling on some distribution p. The mixing time τ(ε) is the minimum time t such that the total variation distance between X^t and the stationary distribution is at most ε. That is,

τ(ε) = min{t : d(t) < ε},   d(t) = max_x ||P(X^t | X^0 = x) − p(X)||_TV.
Unfortunately, the mixing time can be extremely long, which makes the use of Gibbs sampling
delicate in practice. For example, for the two-dimensional Ising model with zero field and uniform
interactions, it is known that mixing time is polynomial (in the size of the grid) when the interaction
strengths are below a threshold β_c, and exponential for stronger interactions [11]. For more general
distributions, such tight bounds are not generally known, but one can still derive sufficient conditions
for fast mixing. The main result we will use is the following [8].
Theorem 4. Consider the dependency matrix R corresponding to some distribution p(X_1, ..., X_n). For Gibbs sampling with random updates, if ||R||₂ < 1, the mixing time is bounded by

τ(ε) ≤ ( n / (1 − ||R||₂) ) · ln(n/ε).
Roughly speaking, if the spectral norm (maximum singular value) of R is less than one, rapid mixing
will occur. A similar result holds in the case of systematic scan updates [4, 8].
Some of the classic ways of establishing fast mixing can be seen as special cases of this. For
example, the Dobrushin criterion is that ||R||₁ < 1, which can be easier to verify in many cases, since ||R||₁ = max_j ∑_i |R_ij| does not require the computation of singular values. However, for symmetric matrices, it can be shown that ||R||₂ ≤ ||R||₁, meaning the above result is tighter.
3 Mixing Time Bounds
For variables x_i ∈ {−1, +1}, an Ising model is of the form

p(x) = exp( ∑_{i,j} β_ij x_i x_j + ∑_i α_i x_i − A(β, α) ),

where β_ij is the interaction strength between variables i and j, α_i is the "field" for variable i, and A ensures normalization. This can be seen as a member of the exponential family p(x) = exp(θ · f(x) − A(θ)), where f(x) = {x_i x_j ∀(i, j)} ∪ {x_i ∀i} and θ contains both β and α.

Lemma 5. For an Ising model, the dependency matrix is bounded by

R_ij ≤ tanh|β_ij| ≤ |β_ij|.

Hayes [8] proves this for the case of constant β and zero field, but simple modifications to the proof can give this result.

Thus, to summarize, an Ising model can be guaranteed to be fast mixing if the spectral norm of the absolute value of the interaction terms is less than one.
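The condition is straightforward to check numerically; the sketch below combines Lemma 5 with the bound of Theorem 4 (our code, with a toy chain-structured coupling matrix):

```python
import numpy as np

def ising_mixing_bound(beta, eps=1e-2):
    """Sufficient condition of Theorem 4 via Lemma 5: returns the mixing-time
    bound n/(1 - ||R||_2) * ln(n/eps) if ||tanh|beta|||_2 < 1, else None."""
    R = np.tanh(np.abs(beta))              # entrywise bound on the dependency matrix
    norm = np.linalg.norm(R, ord=2)        # spectral norm (largest singular value)
    if norm >= 1.0:
        return None                        # the sufficient condition fails
    n = beta.shape[0]
    return n / (1.0 - norm) * np.log(n / eps)

n = 10
beta = np.diag(np.full(n - 1, 0.3), k=1)   # toy chain couplings (our choice)
beta = beta + beta.T
print(ising_mixing_bound(beta))            # a finite bound: fast mixing guaranteed
```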
4 Projection
In this section, we imagine that we have some set of parameters θ, not necessarily fast mixing, and would like to obtain another set of parameters ψ which are as close as possible to θ, but guaranteed
to be fast mixing. This section derives a projection in the Euclidean norm, while Section 5 will build
on this to consider other divergence measures.
We will use the following standard result that states that given a matrix A, the closest matrix with a
maximum spectral norm can be obtained by thresholding the singular values.
Theorem 6. If A has a singular value decomposition A = U S Vᵀ, and ||·||_F denotes the Frobenius norm, then B = arg min_{B : ||B||₂ ≤ c} ||A − B||_F can be obtained as B = U S′ Vᵀ, where S′_ii = min(S_ii, c).
We denote this projection by B = Π_c[A]. This is close to providing an algorithm for obtaining the
closest set of Ising model parameters that obey a given spectral norm constraint. However, there are
two issues. First, in general, even if A is sparse, the projected matrix B will be dense, meaning that
projecting will destroy a sparse graph structure. Second, this result constrains the spectral norm of
B itself, rather than R = |B|, which is what needs to be controlled. The theorem below provides a dual method that fixes these issues.
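Theorem 6 translates into a few lines of numpy; a minimal sketch:

```python
import numpy as np

def project_spectral_norm(A, c):
    """Pi_c[A]: the Frobenius-closest matrix to A with spectral norm at most c,
    obtained by thresholding singular values as in Theorem 6."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.minimum(s, c)) @ Vt

A = np.random.default_rng(0).standard_normal((5, 5))
B = project_spectral_norm(A, 0.9)
print(np.linalg.norm(B, 2))   # <= 0.9 up to floating point
```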
Here, we take some matrix Z that corresponds to the graph structure, by setting Z_ij = 0 if (i, j) is an edge, and Z_ij = 1 otherwise. Then, enforcing that B obeys the graph structure is equivalent to enforcing that Z_ij B_ij = 0 for all (i, j). Thus, finding the closest set of parameters B is equivalent to solving

min_{B,D} ||A − B||_F   subject to   ||D||₂ ≤ c,  Z_ij D_ij = 0,  D = |B|.    (1)
We find it convenient to solve this minimization by performing some manipulations, and deriving a
dual. The proof of this theorem is provided in the appendix. To accomplish the maximization of g
over M and ?, we use LBFGS-B [1], with bound constraints used to enforce that M ? 0.
&
The following theorem uses the ?triple dot product? notation of A ? B ? C = ij Aij Bij Cij .
Theorem 7. Define R = |A|. The minimization in Eq. 1 is equivalent to the problem of
max_{M≥0, Λ} g(Λ, M), where the objective and gradient of g are, for D(Λ, M) = Π_c[R + M − Λ ⊙ Z],

    g(Λ, M) = (1/2) ||D(Λ, M) − R||_F^2 + Λ · Z · D(Λ, M)        (2)

    dg/dΛ = Z ⊙ D(Λ, M)                                          (3)

    dg/dM = D(Λ, M).                                             (4)
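A minimal sketch of this dual ascent, assuming scipy is available and interpreting ⊙ as the elementwise product, is below; recovering signed parameters from the optimal D (e.g. as sign(A) ⊙ D) is our assumption, not stated in the theorem:

import numpy as np
from scipy.optimize import minimize

def project_spectral_norm(A, c):
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(S, c)) @ Vt

def dual_project(A, Z, c):
    R = np.abs(A)
    n2 = R.size

    def neg_g(z):                       # L-BFGS-B minimizes, so negate g
        lam = z[:n2].reshape(R.shape)
        M = z[n2:].reshape(R.shape)
        D = project_spectral_norm(R + M - lam * Z, c)
        g = 0.5 * np.sum((D - R) ** 2) + np.sum(lam * Z * D)   # Eq. 2
        grad = np.concatenate([(Z * D).ravel(), D.ravel()])     # Eqs. 3-4
        return -g, -grad

    z0 = np.zeros(2 * n2)
    bounds = [(None, None)] * n2 + [(0.0, None)] * n2           # M >= 0
    res = minimize(neg_g, z0, jac=True, method="L-BFGS-B", bounds=bounds)
    lam = res.x[:n2].reshape(R.shape)
    M = res.x[n2:].reshape(R.shape)
    # D approximates |B|; signs could be restored as np.sign(A) * D (assumption)
    return project_spectral_norm(R + M - lam * Z, c)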
5 Divergences
Again, we would like to find a parameter vector ψ that is close to a given vector θ, but is guaranteed
to be fast mixing, with several notions of "closeness" that vary in terms of accuracy and computational convenience. Formally, if Θ is the set of parameters that we can guarantee to be fast mixing,
and D(θ, ψ) is a divergence between θ and ψ, then we would like to solve

    arg min_{ψ∈Θ} D(θ, ψ).        (5)
As we will see, in selecting D there appears to be something of a trade-off between the quality of
the approximation, and the ease of computing the projection in Eq. 5.
In this section, we work with the generic exponential family representation

    p(x; θ) = exp(θ · f(x) − A(θ)).

We use μ to denote the mean value of f. By a standard result, this is equal to the gradient of A, i.e.

    μ(θ) = Σ_x p(x; θ) f(x) = ∇A(θ).
5.1 Euclidean Distance
The simplest divergence is simply the l2 distance between the parameter vectors, D(θ, ψ) = ||θ − ψ||_2.
For the Ising model, Theorem 7 provides a method to compute the projection arg min_{ψ∈Θ} ||θ − ψ||_2.
While simple, this has no obvious probabilistic interpretation, and other divergences perform
better in the experiments below.
However, it also forms the basis of our projected gradient descent strategy for computing the projection in Eq. 5 under more general divergences D. Specifically, we will do this by iterating

    1. ψ′ ← ψ − λ (d/dψ) D(θ, ψ)
    2. ψ ← arg min_{ψ∈Θ} ||ψ′ − ψ||_2

for some step-size λ. In some cases, dD/dψ can be calculated exactly, and this is simply projected
gradient descent. In other cases, one needs to estimate dD/dψ by sampling from ψ. As discussed
below, we do this by maintaining a "pool" of samples. In each iteration, a few Markov chain steps
are applied with the current parameters, and then the gradient is estimated using them. Since the
gradients estimated at each time-step are dependent, this can be seen as an instance of Ergodic Mirror
Descent [3]. This guarantees convergence if the number of Markov chain steps and the step-size λ
are both functions of the total number of optimization iterations.
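The loop itself is simple; this sketch assumes user-supplied grad_D and project_theta callables (the latter, e.g., the Euclidean projection of Theorem 7), which are our illustrative names:

import numpy as np

def project_divergence(theta, grad_D, project_theta, steps=100, lam=0.1):
    """Minimize D(theta, psi) over the fast-mixing set by projected gradient.

    grad_D(theta, psi) -> dD/dpsi; project_theta(psi) -> closest fast-mixing
    parameters in the Euclidean norm. Both are assumptions of this sketch.
    """
    psi = project_theta(theta.copy())
    for _ in range(steps):
        psi = psi - lam * grad_D(theta, psi)   # step 1: gradient step
        psi = project_theta(psi)               # step 2: Euclidean projection
    return psi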
5.2 KL-Divergence
Perhaps the most natural divergence to use would be the "inclusive" KL-divergence

    D(θ, ψ) = KL(θ||ψ) = Σ_x p(x; θ) log [ p(x; θ) / p(x; ψ) ].        (6)
This has the "zero-avoiding" property [12] that ψ will tend to assign some probability to all configurations that θ assigns nonzero probability to. It is easy to show that the derivative is

    dD(θ, ψ)/dψ = μ(ψ) − μ(θ),        (7)

where μ(ψ) = E_ψ[f(X)]. Unfortunately, this requires inference with respect to both the parameter
vectors θ and ψ. Since ψ will be enforced to be fast-mixing during optimization, one could approximate μ(ψ) by sampling. However, θ is presumed to be slow-mixing, making μ(θ) difficult to
compute. Thus, this divergence is only practical on low-treewidth "toy" graphs.
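For such toy graphs, μ(θ) can be computed by brute-force enumeration, as in this sketch (x_i ∈ {−1, +1}; feasible only for very small n):

import itertools
import numpy as np

def mean_parameters(beta, alpha):
    """Exact E[x_i x_j] and E[x_i] for a tiny Ising model by enumeration."""
    n = len(alpha)
    pair_mean, node_mean, Zsum = np.zeros((n, n)), np.zeros(n), 0.0
    for x in itertools.product([-1, 1], repeat=n):
        x = np.array(x, dtype=float)
        w = np.exp(x @ beta @ x + alpha @ x)   # unnormalized probability
        Zsum += w
        pair_mean += w * np.outer(x, x)
        node_mean += w * x
    return pair_mean / Zsum, node_mean / Zsum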
5.3 Piecewise KL-Divergences
Inspired by the piecewise likelihood [17] and likelihood approximations based on mixtures of trees
[15], we seek tractable approximations of the KL-divergence based on tractable subgraphs. Our
motivation is the following: if θ and ψ define the same distribution, then if a certain set of edges
are removed from both, they should continue to define the same distribution¹. Thus, given some
graph T, we define the "projection" θ(T) onto the tree by setting all edge parameters to zero if
not part of T. Then, given a set of graphs T , the piecewise KL-divergence is

    D(θ, ψ) = max_T KL(θ(T) || ψ(T)).
Computing the derivative of this divergence is not hard: one simply computes the KL-divergence
for each graph, and uses the gradient as in Eq. 7 for the maximizing graph.
There is some flexibility in selecting the graphs T . In the simplest case, one could simply select a
set of trees (ensuring that each edge is covered by one tree), which makes it easy to compute the
KL-divergence on each tree using the sum-product algorithm. We will also experiment with selecting
low-treewidth graphs, where exact inference can take place using the junction tree algorithm.
5.4 Reversed KL-Divergence
We also consider the "zero-forcing" KL-divergence

    D(θ, ψ) = KL(ψ||θ) = Σ_x p(x; ψ) log [ p(x; ψ) / p(x; θ) ].
Theorem 8. The divergence D(θ, ψ) = KL(ψ||θ) has the gradient

    (d/dψ) D(θ, ψ) = Σ_x p(x; ψ) (ψ − θ) · f(x) ( f(x) − μ(ψ) ).
Arguably, using this divergence is inferior to the "zero-avoiding" KL-divergence. For example,
since the parameters ψ may fail to put significant probability at configurations where θ does, using
importance sampling to reweight samples from ψ to estimate expectations with respect to θ could
have high variance. Further, it can be non-convex with respect to ψ. Nevertheless, it often works
well in practice. Minimizing this divergence under the constraint that the dependency matrix R
corresponding to ψ have a limited spectral norm is closely related to naive mean-field, which can be
seen as a degenerate case where one constrains R to have zero norm.
This is easier to work with than the "zero-avoiding" KL-divergence in Eq. 6 since it involves taking
expectations with respect to ψ, rather than θ: since ψ is enforced to be fast-mixing, these expectations can be approximated by sampling. Specifically, suppose that one has generated a set of
samples x^1, ..., x^K using the current parameters ψ. Then, one can first approximate the marginals
by μ̂ = (1/K) Σ_{k=1}^K f(x^k), and then approximate the gradient by

    ĝ = (1/K) Σ_{k=1}^K ( (ψ − θ) · f(x^k) ) ( f(x^k) − μ̂ ).        (8)
It is a standard result that if two estimators are unbiased and independent, the product of the two
estimators will also be unbiased. Thus, if one used separate sets of perfect samples to estimate μ̂
and ĝ, then ĝ would be an unbiased estimator of dD/dψ. In practice, of course, we generate the
samples by Gibbs sampling, so they are not quite perfect. We find in practice that using the same set
of samples twice makes little difference, and do so in the experiments.
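A sketch of the estimator in Eq. 8, assuming the samples are stacked as rows of X and a helper feats(x) that returns the sufficient-statistics vector f(x) (both names are ours):

import numpy as np

def kl_reversed_gradient(theta_vec, psi_vec, X, feats):
    F = np.array([feats(x) for x in X])      # K x d matrix of f(x^k)
    mu_hat = F.mean(axis=0)                  # sample marginals (mu-hat)
    scal = F @ (psi_vec - theta_vec)         # (psi - theta) . f(x^k), length K
    return (scal[:, None] * (F - mu_hat)).mean(axis=0)   # Eq. 8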
¹ Technically, here, we assume that the exponential family is minimal. However, in the case of an overcomplete exponential family, enforcing this will simply ensure that θ and ψ use the same reparameterization.
[Figure 1 shows four panels of marginal error vs. interaction strength: "Grid, Attractive", "Grid,
Mixed", "Edge Density = 0.3, Attractive", and "Edge Density = 0.3, Mixed", each comparing
Loopy BP, TRW, Mean-Field, Gibbs on the original parameters, and the Euclidean, piecewise KL,
KL(ψ||θ), and KL(θ||ψ) projections.]

Figure 1: The mean error of estimated univariate marginals on 8x8 grids (top row) and low-density
random graphs (bottom row), comparing 30k iterations of Gibbs sampling after projection to variational methods. To approximate the computational effort of projection (Table 1), sampling on the
original parameters with 250k iterations is also included as a lower curve. (Full results in appendix.)
6 Experiments
Our experimental evaluation follows that of Hazan and Shashua [9] in evaluating the accuracy of
the methods using the Ising model in various configurations. In the experiments, we approximate
randomly generated Ising models with rapid-mixing distributions using the projection algorithms
described previously. Then, the marginals of the rapid-mixing approximate distributions are compared
against those of the target distributions by running a Gibbs chain on each. We calculate the mean
absolute distance of the marginals as the accuracy measure, with the marginals computed via the
exact junction-tree algorithm.
We evaluate projecting under the Euclidean distance (Section 5.1), the piecewise divergence (Section
5.3), and the zero-forcing KL-divergence KL(ψ||θ) (Section 5.4). On small graphs, it is possible to
minimize the zero-avoiding KL-divergence KL(θ||ψ) by computing marginals using the junction-tree
algorithm. However, as minimizing this KL-divergence leads to exact marginal estimates, it
doesn't provide a useful measure of marginal accuracy. Our methods are compared with four other
inference algorithms, namely loopy belief-propagation (LBP), Tree-reweighted belief-propagation
(TRW), Naive mean-field (MF), and Gibbs sampling on the original parameters.
LBP, MF and TRW are among the most widely applied variational methods for approximate inference. The MF algorithm uses a fully factorized distribution as the tractable family, and can be viewed
as an extreme case of minimizing the zero-forcing KL-divergence KL(ψ||θ) under the constraint
of zero spectral norm. The tractable family that it uses guarantees "instant" mixing but is much
more restrictive. Theoretically, Gibbs sampling on the original parameters will produce highly accurate marginals if run long enough. However, this can take exponentially long and convergence is
generally hard to diagnose [2]. In contrast, Gibbs sampling on the rapid-mixing approximation is
guaranteed to converge rapidly but will result in less accurate marginals asymptotically. Thus, we
also include time-accuracy comparisons between these two strategies in the experiments.
                         Grid, Strength 1.5        Grid, Strength 3          Random Graph, Strength 3
                         Gibbs Steps  SVDs         Gibbs Steps  SVDs         Gibbs Steps  SVDs
30,000 Gibbs steps       30k / 0.17s  -            30k / 0.17s  -            30k / 0.04s  -
250,000 Gibbs steps      250k / 1.4s  -            250k / 1.4s  -            250k / 0.33s -
Euclidean Projection     -            22 / 0.04s   -            78 / 0.15s   -            17 / .0002s
Piecewise-1 Projection   -            322 / 0.61s  -            547 / 1.0s   -            408 / 0.047s
KL Projection            30k / 0.17s  265 / 0.55s  30k / 0.17s  471 / 0.94s  30k / 0.04s  300 / 0.037s

Table 1: Running Times on various attractive graphs, showing the number of Gibbs passes and
Singular Value Decompositions, as well as the amount of computation time. The random graph is
based on an edge density of 0.7. Mean-Field, Loopy BP, and TRW take less than 0.01s.
6.1 Configurations
Two types of graph topologies are used: two-dimensional 8 × 8 grids and random graphs with 10
nodes. Each edge is independently present with probability p_e ∈ {0.3, 0.5, 0.7}. Node parameters α_i
are uniformly drawn from unif(−d_n, d_n) and we fix the field strength to d_n = 1.0. Edge parameters
β_ij are uniformly drawn from unif(−d_e, d_e) or unif(0, d_e) to obtain mixed or attractive interactions,
respectively. We generate graphs with different interaction strengths d_e = 0, 0.5, . . . , 4. All results
are averaged over 50 random trials.
To calculate piecewise divergences, it remains to specify the set of subgraphs T ; each element can
be any tractable subgraph of the original distribution. For the grids, one straightforward choice is to use the
horizontal and the vertical chains as subgraphs. We also test with chains of treewidth 2. For random
graphs, we use the sets of random spanning trees which can cover every edge of the original graphs
as the set of subgraphs.
A stochastic gradient descent algorithm is applied to minimize the zero-forcing KL-divergence
KL(ψ||θ). In this algorithm, a "pool" of samples is repeatedly used to estimate gradients as in
Eq. 8. After each parameter update, each sample is updated by a single Gibbs step, consisting of
one pass over all variables. The performance of this algorithm can be affected by several parameters,
including the gradient search step size, the size of the sample pool, the number of Gibbs updates,
and the number of total iterations. (This algorithm can be seen as an instance of Ergodic Mirror
Descent [3].) Without intensive tuning of these parameters, we choose a constant step size of 0.1,
sample pool size of 500 and 60 total iterations, which performed reasonably well in practice.
For each original or approximate distribution, a single chain of Gibbs sampling is run on the final
parameters, and marginals are estimated from the samples drawn. Each Gibbs iteration is one pass
of systematic scan over the variables in fixed order. Note that this does not take into account the
computational effort deployed during projection, which ranges from 30,000 total Gibbs iterations
with repeated Euclidean projection (KL(ψ||θ)) to none at all (original parameters). It has been
our experience that more aggressive parameters can lead to this procedure being more accurate than
Gibbs in a comparison of total computational effort, but such a scheduling tends to also reduce the
accuracy of the final parameters, making results more difficult to interpret.
In Section 3, we showed that for Ising models, a sufficient condition for rapid mixing is that the spectral
norm of the pairwise weight matrix is less than 1.0. However, we find in practice that using a spectral norm
bound of 2.5 instead of 1.0 can still preserve the rapid-mixing property and gives a better approximation to the original distributions. (See Section 7 for a discussion.)
7 Discussion
Inference in high-treewidth graphical models is intractable, which has motivated several classes
of approximations based on tractable families. In this paper, we have proposed a new notion of
"tractability", insisting not that a graph has a fast algorithm for exact inference, but only that it obeys
parameter-space conditions ensuring that Gibbs sampling will converge rapidly to the stationary
distribution. For the case of Ising models, we use a simple condition that can guarantee rapid mixing,
namely that the spectral norm of the matrix of interaction strengths is less than one.
[Figure 2 shows four panels of marginal error vs. number of samples (log scale): "Grid, Interaction
Strength 4.0, Mixed", "Grid, Interaction Strength 4.0, Attractive", "Edge Density = 0.3, Interaction
Strength 3.0, Mixed", and "Edge Density = 0.3, Interaction Strength 3.0, Attractive", comparing the
same methods as in Figure 1.]

Figure 2: Example plots of the accuracy of obtained marginals vs. the number of samples. Top:
Grid graphs. Bottom: Low-Density Random graphs. (Full results in appendix.)
Given an intractable set of parameters, we consider using this approximate family by "projecting"
the intractable distribution onto it under several divergences. First, we consider the Euclidean distance of parameters, and derive a dual algorithm to solve the projection, based on an iterative thresholding of the singular value decomposition. Next, we extend this to more probabilistic divergences.
Firstly, we consider a novel "piecewise" divergence, based on computing the exact KL-divergence on
several low-treewidth subgraphs. Secondly, we consider projecting under the KL-divergence. This requires a stochastic approximation approach where one repeatedly generates samples from the model,
and projects in the Euclidean norm after taking a gradient step.
We compare experimentally to Gibbs sampling on the original parameters, along with several standard variational methods. The proposed methods are more accurate than variational approximations.
Given enough time, Gibbs sampling using the original parameters will always be more accurate, but
with finite time, projecting onto the fast-mixing set generally gives better results.
Future work might extend this approach to general Markov random fields. This will require two
technical challenges. First, one must find a bound on the dependency matrix for general MRFs,
and secondly, an algorithm is needed to project onto the fast-mixing set defined by this bound.
Fast-mixing distributions might also be used for learning. E.g., if one is doing maximum likelihood learning using MCMC to estimate the likelihood gradient, it would be natural to constrain the
parameters to a fast mixing set.
One weakness of the proposed approach is the apparent looseness of the spectral norm bound. For
the two-dimensional Ising model with no univariate terms and a constant interaction strength β,
there is a well-known threshold β_c = (1/2) ln(1 + √2) ≈ .4407, obtained using more advanced techniques than the spectral norm [11]. Roughly, for β < β_c, mixing is known to occur quickly (polynomial in the grid size) while for β > β_c, mixing is exponential. On the other hand, the spectral norm
bound will be equal to one for β = .25, meaning the bound is too conservative in this case by a factor
of β_c/.25 ≈ 1.76. A tighter bound on when rapid mixing will occur would be more informative.
References
[1] Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm
for bound constrained optimization. SIAM J. Sci. Comput., 16(5):1190-1208, 1995.
[2] Mary Kathryn Cowles and Bradley P. Carlin. Markov chain Monte Carlo convergence diagnostics: A comparative review. Journal of the American Statistical Association, 91:883-904, 1996.
[3] John C. Duchi, Alekh Agarwal, Mikael Johansson, and Michael I. Jordan. Ergodic mirror
descent. SIAM Journal on Optimization, 22(4):1549-1578, 2012.
[4] Martin E. Dyer, Leslie Ann Goldberg, and Mark Jerrum. Matrix norms and rapid mixing for
spin systems. Ann. Appl. Probab., 19:71-107, 2009.
[5] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian
restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6(6):721-741, 1984.
[6] Amir Globerson and Tommi Jaakkola. Approximate inference using planar graph decomposition. In NIPS, pages 473-480, 2006.
[7] Firas Hamze and Nando de Freitas. From fields to trees. In UAI, 2004.
[8] Thomas P. Hayes. A simple condition implying rapid mixing of single-site dynamics on spin
systems. In FOCS, pages 39-46, 2006.
[9] Tamir Hazan and Amnon Shashua. Convergent message-passing algorithms for inference over
general graphs with convex free energies. In UAI, pages 264-273, 2008.
[10] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times.
American Mathematical Society, 2006.
[11] Eyal Lubetzky and Allan Sly. Critical Ising on the square lattice mixes in polynomial time.
Commun. Math. Phys., 313(3):815-836, 2012.
[12] Thomas Minka. Divergence measures and message passing. Technical report, 2005.
[13] Yuval Peres and Peter Winkler. Can extra updates delay mixing? arXiv:1112.0603, 2011.
[14] C. Peterson and J. R. Anderson. A mean field theory learning algorithm for neural networks.
Complex Systems, 1:995-1019, 1987.
[15] Patrick Pletscher, Cheng S. Ong, and Joachim M. Buhmann. Spanning Tree Approximations
for Conditional Random Fields. In AISTATS, 2009.
[16] Lawrence K. Saul and Michael I. Jordan. Exploiting tractable substructures in intractable
networks. In NIPS, pages 486-492, 1995.
[17] Charles Sutton and Andrew McCallum. Piecewise training for structured prediction. Machine
Learning, 77:165-194, 2009.
[18] Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo
simulations. Phys. Rev. Lett., 58:86-88, Jan 1987.
[19] Martin Wainwright, Tommi Jaakkola, and Alan Willsky. A new class of upper bounds on the
log partition function. IEEE Transactions on Information Theory, 51(7):2313-2335, 2005.
[20] Eric P. Xing, Michael I. Jordan, and Stuart Russell. A generalized mean field algorithm for
variational inference in exponential families. In UAI, 2003.
[21] Jonathan Yedidia, William Freeman, and Yair Weiss. Constructing free energy approximations
and generalized belief propagation algorithms. IEEE Transactions on Information Theory,
51:2282-2312, 2005.
Embed and Project:
Discrete Sampling with Universal Hashing
Stefano Ermon, Carla P. Gomes
Dept. of Computer Science
Cornell University
Ithaca NY 14853, U.S.A.
Ashish Sabharwal
IBM Watson Research Ctr.
Yorktown Heights
NY 10598, U.S.A.
Bart Selman
Dept. of Computer Science
Cornell University
Ithaca NY 14853, U.S.A.
Abstract
We consider the problem of sampling from a probability distribution defined over
a high-dimensional discrete set, specified for instance by a graphical model. We
propose a sampling algorithm, called PAWS, based on embedding the set into
a higher-dimensional space which is then randomly projected using universal
hash functions to a lower-dimensional subspace and explored using combinatorial
search methods. Our scheme can leverage fast combinatorial optimization tools
as a blackbox and, unlike MCMC methods, samples produced are guaranteed to
be within an (arbitrarily small) constant factor of the true probability distribution.
We demonstrate that by using state-of-the-art combinatorial search tools, PAWS
can efficiently sample from Ising grids with strong interactions and from software
verification instances, while MCMC and variational methods fail in both cases.
1
Introduction
Sampling techniques are one of the most widely used approaches to approximate probabilistic reasoning for high-dimensional probability distributions where exact inference is intractable. In fact,
many statistics of interest can be estimated from sample averages based on a sufficiently large number of samples. Since this can be used to approximate #P-complete inference problems, sampling is
also believed to be computationally hard in the worst case [1, 2].
Sampling from a succinctly specified combinatorial space is believed to be much harder than searching
the space. Intuitively, not only do we need to be able to find areas of interest (e.g., modes of the
underlying distribution) but also to balance their relative importance. Typically, this is achieved
using Markov Chain Monte Carlo (MCMC) methods. MCMC techniques are a specialized form
of local search that only allows moves that maintain detailed balance, thus guaranteeing the right
occupation probability once the chain has mixed. However, in the context of hard combinatorial
spaces with complex internal structure, mixing times are often exponential. An alternative is to
use complete or systematic search techniques such as Branch and Bound for integer programming,
DPLL for SATisfiability testing, and constraint and answer-set programming (CP & ASP), which
are preferred in many application areas, and have witnessed a tremendous success in the past few
decades. It is therefore a natural question whether one can construct sampling techniques based on
these more powerful complete search methods rather than local search.
Prior work in cryptography by Bellare et al. [3] showed that it is possible to uniformly sample
witnesses of an NP language leveraging universal hash functions and using only a small number
of queries to an NP-oracle. This is significant because samples can be used to approximate #Pcomplete (counting) problems [2], a complexity class believed to be much harder than NP. Practical
algorithms based on these ideas were later developed [4?6] to near-uniformly sample solutions of
propositional SATisfiability instances, using a SAT solver as an NP-oracle. However, unlike SAT,
most models used in Machine Learning, physics, and statistics are weighted (represented, e.g., as
graphical models) and cannot be handled using these techniques.
We fill this gap by extending this approach, based on hashing-based projections and NP-oracle
queries, to the weighted sampling case. Our algorithm, called PAWS, uses a form of approximation
by quantization [7] and an embedding technique inspired by slice sampling [8], before applying
projections. This parallels recent work [9] that extended similar ideas for unweighted counting to
the weighted counting world, addressing the problem of discrete integration. Although in theory
one could use that technique to produce samples by estimating ratios of discrete integrals [1, 2],
the general sampling-by-counting reduction requires a large number of such estimates (proportional
to the number of variables) for each sample. Further, the accuracy guarantees on the sampling
probability quickly become loose when taking ratios of estimates. In contrast, PAWS is a more
direct and practical sampling approach, providing better accuracy guarantees while requiring a much
smaller number of NP-oracle queries per sample.
Answering NP-oracle queries, of course, requires exponential time in the worst case, in accordance
with the hardness of sampling. We rely on the fact that combinatorial search tools, however, are
often extremely fast in practice, and any complete solver can be used as a black box in our sampling
scheme. Another key advantage is that when combinatorial search succeeds, our analysis provides a
certificate that, with high probability, any samples produced will be distributed within an (arbitrarily
small) constant factor of the desired probability distribution. In contrast, with MCMC methods it
is generally hard to assess whether the chain has mixed. We empirically demonstrate that PAWS
outperforms MCMC as well as variational methods on hard synthetic Ising models and on a real-world test case generation problem for software verification.
2
Setup and Problem Definition
We are given a probability distribution p over a (high-dimensional) discrete set X, where the probability of each item x ∈ X is proportional to a weight function w : X → R+, with R+ being the set
of non-negative real numbers. Specifically, given x ∈ X, its probability p(x) is given by

    p(x) = w(x) / Z,        Z = Σ_{x∈X} w(x),
where Z is a normalization constant known as the partition function. We assume w is specified
compactly, e.g., as the product of factors or in a conjunctive normal form. As our driving example,
we consider the case of undirected discrete graphical models [10] with n = |V| random variables
{x_i, i ∈ V}, where each x_i takes values in a finite set X_i. We consider a factor graph representation
for a joint probability distribution over elements (or configurations) x ∈ X = X_1 × · · · × X_n:

    p(x) = w(x)/Z = (1/Z) Π_{α∈I} ψ_α({x}_α).        (1)

This is a compact representation for p(x) based on the weight function w(x) = Π_{α∈I} ψ_α({x}_α),
defined as the product of potentials or factors ψ_α : {x}_α ↦ R+, where I is an index set and
{x}_α ⊆ V the subset of variables factor ψ_α depends on. For simplicity of exposition, without loss
of generality, we will focus on the case of binary variables, where X = {0, 1}^n.
We consider the fundamental problem of (approximately) sampling from p(x), i.e., designing a
randomized algorithm that takes w as input and outputs elements x ? X according to the probability
distribution p. This is a hard computational problem in the worst case. In fact, it is more general
than NP-complete decision problems (e.g., sampling solutions of a SATisfiability instance specified
as a factor graph entails finding at least one solution, or deciding there is none). Further, samples
can be used to approximate #P-complete problems [2], such as estimating a marginal probability.
3
Sampling by Embed, Project, and Search
Conceptually, our sampling strategy has three steps, described in Sections 3.1, 3.2, and 3.3, resp.
(1) From the input distribution p we construct a new distribution p′ that is "close" to p but more
discrete. Specifically, p′ is based on a new weight function w′ that takes values only in a discrete
set of geometrically increasing weights. (2) From p′, we define a uniform probability distribution
p′′ over a carefully constructed higher-dimensional embedding of X = {0, 1}^n. The previous
discretization step allows us to specify p′′ in a compact form, and sampling from p′′ can be seen
to be precisely equivalent to sampling from p′. (3) Finally, we indirectly sample from the desired
distribution p by sampling uniformly from p′′, by randomly projecting the embedding onto a lower-dimensional subspace using universal hash functions and then searching for feasible states.
The first and third steps involve a bounded loss of accuracy, which we can trade off with computational efficiency by setting hyper-parameters of the algorithm. A key advantage is that our technique
reduces the weighted sampling problem to that of solving one MAP query (i.e., finding the most likely
state) and a polynomial number of feasibility queries (i.e., finding any state with non-zero probability) for the original graphical model augmented (through an embedding) with additional variables
and carefully constructed factors. In practice, we use a combinatorial optimization package, which
requires exponential time in the worst case (consistent with the hardness of sampling) but is often fast
in practice. Our analysis shows that whenever the underlying combinatorial search and optimization
queries succeed, the samples produced are guaranteed, with high probability, to be coming from an
approximately accurate distribution.
3.1
Weight Discretization
We use a geometric discretization of the weights into "buckets", i.e., a uniform discretization of the
log-probability. As we will see, Θ(n) buckets are sufficient to preserve accuracy.
Definition 1. Let M = max_x w(x), r > 1, δ > 0, and ℓ = ⌈log_r(2^n/δ)⌉. Partition the configurations into the following weight-based disjoint buckets: B_i = {x | w(x) ∈ (M/r^{i+1}, M/r^i]}, i =
0, . . . , ℓ − 1, and B_ℓ = {x | w(x) ∈ [0, M/r^ℓ]}. The discretized weight function w′ : {0, 1}^n → R+ is
defined as follows: w′(x) = M/r^{i+1} if x ∈ B_i for i < ℓ, and w′(x) = 0 if x ∈ B_ℓ. The corresponding
discretized probability distribution is p′(x) = w′(x)/Z′, where Z′ is the normalization constant.
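The discretization itself is a constant-time computation per configuration; a sketch (assuming w and the MAP value M are available):

import math

def discretize(w, x, M, r, ell):
    """Return w'(x): the geometric-bucket approximation of Definition 1."""
    wx = w(x)
    if wx <= M / r**ell:        # bucket B_ell: weight is dropped
        return 0.0
    # bucket index i such that w(x) in (M/r^{i+1}, M/r^i]
    i = int(math.floor(math.log(M / wx, r)))
    return M / r**(i + 1)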
Lemma 1. Let ρ = r²/(1 − δ). For all x ∈ ∪_{i=0}^{ℓ−1} B_i, p(x) and p′(x) are within a factor of ρ of each
other. Furthermore, Σ_{x∈B_ℓ} p(x) ≤ δ.
Proof. Since w maps to non-negative values, we have Z ≥ M. Further,

    Σ_{x∈B_ℓ} p(x) = (1/Z) Σ_{x∈B_ℓ} w(x) ≤ |B_ℓ| (1/Z) (M/r^ℓ) ≤ (|B_ℓ|/2^n) (δM/Z) ≤ δM/Z ≤ δ.
This proves the second part of the claim. For the first part, note that by construction, Z′ ≤ Z and

    Z′ = Σ_{i=0}^{ℓ−1} Σ_{x∈B_i} w′(x) ≥ (1/r) Σ_{i=0}^{ℓ−1} Σ_{x∈B_i} w(x) = (1/r) ( Z − Σ_{x∈B_ℓ} w(x) ) ≥ (1/r)(1 − δ) Z.
Thus Z and Z′ are within a factor of r/(1 − δ) of each other. For all x such that x ∉ B_ℓ, recalling
that r > 1 > 1 − δ and that w(x)/r ≤ w′(x) ≤ r w(x), we have

    (1/ρ) p(x) ≤ w(x)/(r Z) ≤ w(x)/(r Z′) ≤ w′(x)/Z′ = p′(x) ≤ r w(x)/Z′ ≤ r² w(x)/((1 − δ) Z) = ρ p(x).

This finishes the proof that p(x) and p′(x) are within a factor of ρ of each other.
Remark 1. If the weights w defined by the original graphical model are represented in finite precision (e.g., there are 2^64 possible weights in double precision floating point), for every b ≥ 1 there
is a possibly large but finite value of ℓ (such that M/r^ℓ is smaller than the smallest representable
weight) such that B_ℓ is empty and the discretization error δ is effectively zero.
3.2
Embed: From Weighted to Uniform Sampling
We now show how to reduce the problem of sampling from the discrete distribution p′ (weighted
sampling) to the problem of uniformly sampling, without loss of accuracy, from a higher-dimensional discrete set into which X = {0, 1}^n is embedded. This is inspired by slice sampling [8],
and can be intuitively understood as its discrete counterpart, where we uniformly sample points (x, y)
from a discrete representation of the area under the (y vs. x) probability density function of p′.
Definition 2. Let w : X → R+, M = max_x w(x), and r = 2^b/(2^b − 1). Then the embedding
S(w, ℓ, b) of X in X × {0, 1}^{(ℓ−1)b} is defined as:

    S(w, ℓ, b) = { (x, y_1^1, y_1^2, . . . , y_{ℓ−1}^{b−1}, y_{ℓ−1}^b) |  w(x) ≥ M/r^i ⟹ ∨_{k=1}^b y_i^k,  1 ≤ i ≤ ℓ − 1;  w(x) > M/r^ℓ },

where ∨_{k=1}^b y_i^k may alternatively be thought of as the linear constraint Σ_{k=1}^b y_i^k ≥ 1. Further, let
p′′ denote a uniform probability distribution over S(w, ℓ, b) and n′ = n + (ℓ − 1)b.
Given a compact representation of w within a combinatorial search or optimization framework, the
set S(w, ℓ, b) can often be easily encoded using the disjunctive constraints on the y variables.

Lemma 2. Let (x, y) = (x, y_1^1, y_1^2, · · · , y_1^b, y_2^1, · · · , y_2^b, · · · , y_{ℓ−1}^1, · · · , y_{ℓ−1}^b) be a sample from p′′,
i.e., a uniformly sampled element from S(w, ℓ, b). Then x is distributed according to p′.

Informally, given x ∈ B_i and x′ ∈ B_{i+1} with i + 1 ≤ ℓ − 1, there are precisely r = 2^b/(2^b − 1)
times more valid configurations (x, y) than (x′, y′). Thus x is sampled r times more often than x′.
A formal proof may be found in the Appendix.
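A brute-force check of this claim on a toy instance can be done by direct enumeration (feasible only for very small n, ℓ, b; function names are ours, and the constraints encode our reading of Definition 2):

import itertools
from collections import Counter

def embedding_counts(w, xs, M, r, ell, b):
    """Count, for each x, the valid y completions in S(w, ell, b)."""
    counts = Counter()
    for x in xs:
        if w(x) <= M / r**ell:           # excluded: need w(x) > M/r^ell
            continue
        for y in itertools.product([0, 1], repeat=(ell - 1) * b):
            ok = True
            for i in range(1, ell):      # disjunctive constraint at level i
                block = y[(i - 1) * b : i * b]
                if w(x) >= M / r**i and not any(block):
                    ok = False
                    break
            if ok:
                counts[x] += 1
    # uniform sampling over S weights each x proportionally to counts[x]
    return counts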
3.3
Project and Search: Uniform Sampling with Hash Functions and an NP-oracle
In principle, using the technique of Bellare et al. [3] and n′-wise independent hash functions we can
sample purely uniformly from S(w, ℓ, b) using an NP oracle to answer feasibility queries. However,
such hash functions involve constructions that are difficult to implement and reason about in existing combinatorial search methods. Instead, we use a more practical algorithm based on pairwise
independent hash functions that can be implemented using parity constraints (modular arithmetic)
and still provides accuracy guarantees. The approach is similar to [5], but we include an algorithmic
way to estimate the number of parity constraints to be used. We also use the pivot technique from
[6] but extend that work in two ways: we introduce a parameter γ (similar to [5]) that allows us to
trade off uniformity against runtime and also provide upper bounds on the sampling probabilities.
We refer to our algorithm as PArity-based Weighted Sampler (PAWS) and provide its pseudocode as
Algorithm 1. The idea is to project by randomly constraining the configuration space using a family
of universal hash functions, search for up to P "surviving" configurations, and then, if fewer than P
survive, perform rejection sampling to choose one of them. The number k of constraints or factors
(encoding a randomly chosen hash function) to add is determined first; this is where we depart from
both Gomes et al. [5], who do not provide a way to compute k, and Chakraborty et al. [6], who do
not fix k or provide upper bounds. Then we repeatedly add k such constraints, check whether fewer
than P configurations survive, and if so output one configuration chosen using rejection sampling.
Intuitively, we need the hashed space to contain no more than P solutions because that is a base case
where we know how to produce uniform samples via enumeration. k is a guess (accurate with high
probability) of the number of constraints that is likely to reduce (by hashing) the original problem
to a situation where enumeration is feasible. If too many or too few configurations survive, the
algorithm fails and is run again. The small failure probability, accounting for a potentially poor
choice of random hash functions, can be bounded irrespective of the underlying graphical model.
A combinatorial optimization procedure is used once in order to determine the maximum weight
M through MAP inference. M is used in the discretization step. Subsequently, several feasibility
queries are issued to the underlying combinatorial search procedure in order to, e.g., count the
number of surviving configurations and produce one as a sample.
We briefly review the construction and properties of universal hash functions [11, 12].
Definition 3. H = {h : {0, 1}^n → {0, 1}^m} is a family of pairwise independent hash functions
if the following two conditions hold when a function H is chosen uniformly at random from H: 1)
∀x ∈ {0, 1}^n, the random variable H(x) is uniformly distributed in {0, 1}^m; 2) ∀x_1, x_2 ∈ {0, 1}^n,
x_1 ≠ x_2, the random variables H(x_1) and H(x_2) are independent.

Proposition 1. Let A ∈ {0, 1}^{m×n}, c ∈ {0, 1}^m. The family H = {h_{A,c}(x) : {0, 1}^n → {0, 1}^m}
where h_{A,c}(x) = Ax + c mod 2 is a family of pairwise independent hash functions.

Further, H is also known to be a family of three-wise independent hash functions [5].
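Sampling from this family is straightforward; a numpy sketch (sample_hash is our illustrative name):

import numpy as np

def sample_hash(m, n, rng=np.random.default_rng()):
    """Draw h_{A,c}(x) = Ax + c mod 2 uniformly from the family H."""
    A = rng.integers(0, 2, size=(m, n))
    c = rng.integers(0, 2, size=m)
    return lambda x: (A @ x + c) % 2

h = sample_hash(m=3, n=8)
x = np.random.default_rng().integers(0, 2, size=8)
survives = not h(x).any()    # a configuration "survives" if hashed to 0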
Algorithm 1 Algorithm PAWS for sampling configurations σ according to w

 1: procedure COMPUTEK(n′, Δ, P, S)
 2:     T ← 24⌈ln(n′/Δ)⌉; k ← −1; count ← 0
 3:     repeat
 4:         k ← k + 1; count ← 0
 5:         for t = 1, · · · , T do
 6:             Sample hash function h^k_{A,c} : {0, 1}^{n′} → {0, 1}^k
 7:             Let S^{k,t} ≜ {(x, y) ∈ S, h^k_{A,c}(x, y) = 0}
 8:             if |S^{k,t}| < P then          /* search for ≥ P different elements */
 9:                 count ← count + 1
10:         end for
11:     until count ≥ ⌈T/2⌉ or k = n′
12:     return k
13: end procedure

14: procedure PAWS(w : {0, 1}^n → R+, ℓ, b, Δ, P, γ)
15:     M ← max_x w(x)                          /* compute with one MAP inference query on w */
16:     S ← S(w, ℓ, b); n′ ← n + b(ℓ − 1)       /* as in Definition 2 */
17:     i ← COMPUTEK(n′, Δ, P, S) + γ
18:     Sample hash fn. h^i_{A,c} : {0, 1}^{n′} → {0, 1}^i, i.e., uniformly choose A ∈ {0, 1}^{i×n′}, c ∈ {0, 1}^i
19:     Let S^i ≜ {(x, y) ∈ S, h^i_{A,c}(x, y) = 0}
20:     Check if |S^i| ≥ P by searching for at least P different elements
21:     if |S^i| ≥ P or |S^i| = 0 then
22:         return ⊥                            /* failure */
23:     else
24:         Fix an arbitrary ordering of S^i    /* for rejection sampling */
25:         Uniformly sample p from {0, 1, . . . , P − 1}
26:         if p < |S^i| then
27:             Select p-th element (x, y) of S^i; return x
28:         else
29:             return ⊥                        /* failure */
30: end procedure
Lemma 3 (see Appendix for a proof) shows that the subroutine COMPUTEK in Algorithm 1 outputs
with high probability a value close to log_2(|S(w, ℓ, b)|/P). The idea is similar to an unweighted
version of the WISH algorithm [9] but with tighter guarantees and using more feasibility queries.

Lemma 3. Let S = S(w, ℓ, b) ⊆ {0, 1}^{n′}, Δ > 0, and α > 0. Further, let P ≥ min{2, 2^{α+2}/(2^α − 1)²},
Z = |S|, k*_P = log_2(Z/P), and k be the output of procedure COMPUTEK(n′, Δ, P, S). Then,
P[k*_P − α ≤ k ≤ k*_P + 1 + α] ≥ 1 − Δ, and COMPUTEK uses O(n′ ln(n′/Δ)) feasibility queries.
Lemma 4. Let S = S(w, ℓ, b) ⊆ {0, 1}^{n′}, Δ, α > 0, P ≥ 2, and γ* = log_2((P + 2√(P + 1) + 2)/P).
For any γ ∈ Z, γ > γ*, let c(γ, P) = 1 − 2^{γ*−γ}/(1 − 1/P − 2^{γ*−γ})². Then with probability
at least 1 − Δ the following holds: PAWS(w, ℓ, b, Δ, P, γ) outputs a sample with probability
at least c(γ, P) 2^{−(α+γ+1)} P/(P − 1) and, conditioned on outputting a sample, every element (x, y) ∈
S(w, ℓ, b) is selected (Line 27) with probability p̂_s(x, y) within a constant factor c(γ, P) of the
uniform probability p′′(x, y) = 1/|S|.
Proof Sketch. For lack of space, we defer details to the Appendix. Briefly, the probability P[σ ∈ S^i]
that σ = (x, y) survives is 2^{−i} by the properties of the hash functions in Definition 3, and the
probability of being selected by rejection sampling is 1/(P − 1). Conditioned on σ surviving, the
mean and variance of the size of the surviving set |S^i| are independent of σ because of 3-wise
independence. When k*_P − α ≤ k ≤ k*_P + 1 + α and i = k + γ, γ > γ*, on average |S^i| < P and
the size is concentrated around the mean. Using Chebychev's inequality, one can upper bound by
1 − c(γ, P) the probability P[|S^i| ≥ P | σ ∈ S^i] that the algorithm fails because |S^i| is too large.
Note that the bound is independent of σ and lets us bound the probability p_s(σ) that σ is output:

    c(γ, P) 2^{−i}/(P − 1) = ( 1 − 2^{γ*−γ}/(1 − 1/P − 2^{γ*−γ})² ) 2^{−i}/(P − 1) ≤ p_s(σ) ≤ 2^{−i}/(P − 1).        (2)

From i = k + γ ≤ k*_P + 1 + α + γ and summing the lower bound of p_s(σ) over all σ, we obtain the
desired lower bound on the success probability. Note that given σ, σ′, p_s(σ) and p_s(σ′) are within
a constant factor c(γ, P) of each other from (2). Therefore, the probabilities p̂_s(σ) (for various σ)
that σ is output conditioned on outputting a sample are also within a constant factor of each other.
From the normalization Σ_σ p̂_s(σ) = 1, one gets the desired result that p̂_s(x, y) is within a constant
factor c(γ, P) of the uniform probability p′′(x, y) = 1/|S|.
3.4
Main Results: Sampling with Accuracy Guarantees
Combining pieces from the previous three sections, we have the following main result:
Theorem 1. Let w : {0, 1}^n → R+, δ > 0, b ≥ 1, Δ > 0, and P ≥ 2. Fix α and γ ∈ Z as in Lemmas 3
and 4, r = 2^b/(2^b − 1), ℓ = ⌈log_r(2^n/δ)⌉, ρ = r²/(1 − δ), bucket B_ℓ as in Definition 1, and κ = 1/c(γ, P).
Then Σ_{x∈B_ℓ} p(x) ≤ δ and with probability at least (1 − Δ) c(γ, P) 2^{−(α+γ+1)} P/(P − 1), PAWS(w, ℓ, b,
Δ, P, γ) succeeds and outputs a sample σ from {0, 1}^n \ B_ℓ. Upon success, each σ ∈ {0, 1}^n \ B_ℓ
is output with probability p̂_s(σ) within a constant factor κρ of the desired probability p(σ) ∝ w(σ).
Proof. Success probability follows from Lemma 4. For x ∈ {0, 1}^n \ B_ℓ, combining Lemmas 1, 2,
4 we obtain

    (1/(κρ)) p(x) ≤ (1/κ) p′(x) = (1/κ) Σ_{y:(x,y)∈S(w,ℓ,b)} p′′(x, y) ≤ Σ_{y:(x,y)∈S(w,ℓ,b)} p̂_s(x, y) = p̂_s(x)
                  ≤ Σ_{y:(x,y)∈S(w,ℓ,b)} κ p′′(x, y) = κ p′(x) ≤ κρ p(x),

where the first inequality accounts for the discretization error from p(x) to p′(x) (Lemma 1), the equality
follows from Lemma 2, and the sampling error between p′′ and p̂_s is bounded by Lemma 4. The rest
is proved in Lemmas 1, 2.
Remark 2. By appropriately setting the hyper-parameters b and ℓ we can make the discretization
error δ arbitrarily small and the factor ρ arbitrarily close to one. Although this does not change the number of required feasibility
queries, it can significantly increase the runtime of combinatorial search because of the increased
search space size |S(w, ℓ, b)|. Practically, one should set these parameters as large as possible, while
ensuring combinatorial searches can be completed within the available time budget. Increasing parameter P improves the accuracy as well, but also increases the number of feasibility queries issued,
which is proportional to P (but does not affect the structure of the search space). Similarly, by
increasing γ we can make κ arbitrarily close to one. However, the probability of success of the algorithm
decreases exponentially as γ is increased. We will demonstrate in the next section that a practical tradeoff between computational complexity and accuracy can be achieved for reasonably sized
problems of interest.
Corollary 2. Let w, b, δ, Δ, γ, P, ρ, κ, and B_ℓ be as in Theorem 1, and p̂_s(σ) be the output distribution
of PAWS(w, ℓ, b, Δ, P, γ). Let φ : {0, 1}^n → R and φ_ℓ = max_{x∈B_ℓ} |φ(x)| ≤ ||φ||_∞. Then,

    (1/(κρ)) E_{p̂_s}[φ] − δ φ_ℓ ≤ E_p[φ] ≤ κρ E_{p̂_s}[φ] + δ φ_ℓ,

where E_{p̂_s}[φ] can be approximated with a sample average using samples produced by PAWS.
4
Experiments
We evaluate PAWS on synthetic Ising Models and on a real-world test case generation problem for
software verification. All experiments used Intel Xeon 5670 3GHz machines with 48GB RAM.
4.1
Ising Grid Models
We first consider the marginal computation task for synthetic grid-structured Ising models with
random interactions (attractive and mixed). Specifically, the corresponding graphical model has n
binary variables x_i, i = 1, · · · , n, with single node potentials ψ_i(x_i) = exp(f_i x_i) and pairwise
interactions ψ_ij(x_i, x_j) = exp(w_ij x_i x_j), where f_i ∈_R [−f, f] and w_ij ∈_R [−w, w] in the mixed
case, while w_ij ∈_R [0, w] in the attractive case.

[Figure 1 shows two scatter plots of estimated vs. true marginals, comparing Gibbs, Belief Propagation, WISH, PAWS b=1, and PAWS b=2: (a) Mixed (w = 4.0, f = 0.6); (b) Attractive (w = 3.0,
f = 0.45).]

Figure 1: Estimated marginals vs. true marginals on 8 × 8 Ising grid models. Closeness to the 45
degree line indicates accuracy. PAWS is run with b ∈ {1, 2}, P = 4, γ = 1, and ℓ = 25 (mixed
case) or ℓ = 40 (attractive case).
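A sketch of the random model generation just described, assuming an m × m grid (function and argument names are ours):

import numpy as np

def random_grid_ising(m, f, w, attractive=False, rng=np.random.default_rng()):
    """Return node fields f_i and pairwise weights w_ij for an m x m grid."""
    fields = rng.uniform(-f, f, size=(m, m))
    edges = {}
    for i in range(m):
        for j in range(m):
            for di, dj in ((0, 1), (1, 0)):      # right and down neighbors
                if i + di < m and j + dj < m:
                    lo = 0.0 if attractive else -w
                    edges[((i, j), (i + di, j + dj))] = rng.uniform(lo, w)
    return fields, edges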
Our implementation of PAWS uses the open source solver ToulBar2 [13] to compute M =
max_x w(x) and as an oracle to check the existence of at least P different solutions. We augmented ToulBar2 with the IBM ILOG CPLEX CP Optimizer 12.3 [14] based on techniques borrowed from [15] to efficiently reason about parity constraints (the hash functions) using Gauss-Jordan elimination. We run the subroutine COMPUTEK in Algorithm 1 only once at the beginning,
and then generate all the samples with the same value of i (Line 17). The comparison is with Gibbs
sampling, Belief Propagation, and the recent WISH algorithm [9]. Ground truth is obtained using
the Junction Tree method [16].
In Figure 1, we show a scatter plot of the estimated vs. true marginal probabilities for two Ising
grids with mixed and attractive interactions, respectively, representative of the general behavior in
the large-weights regime. Each sampling method is run for 10 minutes. Marginals computed with
Gibbs sampling (run for about 108 iterations) are clearly very inaccurate (far from the 45 degree
line), an indication that the Markov Chain had not mixed as an effect of the relatively large weights
that tend to create barriers between modes which are hard to traverse. In contrast, samples from
PAWS provide much more accurate marginals, in part because it does not rely on local search and
hence is not directly affected by the energy landscape (with respect to the Hamming metric). Further,
we see that we can improve the accuracy by increasing the hyper-parameter b. These results highlight
the practical value of having accuracy guarantees on the quality of the samples after finite amounts
of time vs. MCMC-style guarantees that hold only after a potentially exponential mixing time.
Belief Propagation can be seen from Figure 1 to be quite inaccurate in this large-weights regime. Finally, we also compare to the recent WISH algorithm [9] which uses similar hash-based techniques
to estimate the partition function of graphical models. Since producing samples with the general
sampling-by-counting reduction [1, 2] or estimating each marginal as the ratio of two partition functions (with and without a variable clamped) would be too expensive (requiring n + 1 calls to WISH)
we heuristically run it once and use the solutions of the optimization instances it solves in the inner
loop as samples. We see in Figure 1 that while samples produced by WISH can sometimes produce
fairly accurate marginal estimates, these estimates can also be far from the true value because of an
inherent bias introduced by the arg max operator.
4.2
Test Case Generation for Software Verification
Hardware and software verification tools are becoming increasingly important in industrial system
design. For example, IBM estimates $100 million savings over the past 10 years from hardware verification tools alone [17]. Given that complete formal verification is often infeasible, the paradigm
of choice has become that of randomly generating "interesting" test cases to stress the code or chip
Instance    Vars  Factors  Time (s)  MSE (×10^−5)
bench1039   330   785      1710      5.76
bench431    173   410      34.97     4.35
bench115    189   458      52.75     20.74
bench97     170   401      67.03     45.57
bench590    244   527      593.71    8.11
bench105    243   524      842.35    8.56

(a) Marginals: runtime and mean squared error

[Panel (b) plots sampling frequency per solution over roughly 800 solutions, comparing the theoretical frequency with the empirical sample frequency.]

(b) True vs. observed sampling frequencies.

Figure 2: Experiments on software verification benchmark.
with the hope of uncovering bugs. Typically, a model based on hard constraints is used to specify
consistent input/output pairs, or valid program execution traces. In addition, in some systems, domain knowledge can be specified by experts in the form of soft constraints, for instance to introduce
a preference for test cases where operands are zero and bugs are more likely [17].
For our experiments, we focus on software (SW) verification, using an industrial benchmark [18]
produced by Microsoft?s SAGE system [19, 20]. Each instance defines a uniform probability distribution over certain valid traces of a computer program. We modify this benchmark by introducing
soft constraints defining a weighted distribution over valid traces, indicating traces that meet certain
criteria should be sampled more often. Specifically, following Naveh et al. [17] we introduce a preference towards traces where certain registers are zero. The weight is chosen to be a power of two,
so that there is no loss of accuracy due to discretization using the previous construction with b = 1.
These instances are very difficult for MCMC methods because of the presence of very large regions
of zero probability that cannot be traversed and thus can break the ergodicity assumption. Indeed we
observed that Gibbs sampling often fails to find a non-zero probability state, and when it finds one
it gets stuck there, because there might not be a non-zero probability path from one feasible state
to another. In contrast, our sampling strategy is not affected and does not require any ergodicity
assumption. Table 2a summarizes the results obtained using the propositional satisfiability (SAT)
solver CryptoMiniSAT [21] as the feasibility query oracle for PAWS. CryptoMiniSAT has built-in
support for parity constraints Ax = c mod 2. We report the time to collect 1000 samples and
the Mean Squared Error (MSE) of the marginals estimated using these samples. We report results
only on the subset of instances where we could enumerate all feasible states using the exact model
counter Relsat [22] in order to obtain ground truth marginals for MSE computation. We see that
PAWS scales to fairly large instances with hundreds of variables and gives accurate estimates of
the marginals. Figure 2b shows the theoretical vs. observed sampling frequencies (based on 50000
samples) for a small instance with 810 feasible states (execution traces), where we see that the output
distribution p′s is indeed very close to the target distribution p.
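For concreteness, random parity constraints of the form Ax = c mod 2 can be emitted in CryptoMiniSAT's XOR-clause extension of DIMACS, where lines starting with "x" denote XOR clauses. The sketch below is illustrative only; the constraint density and the exact encoding of c are assumptions, not details of the benchmark:

```python
import random

def random_xor_constraints(n_vars, m, density=0.5, seed=0):
    """Emit m random parity constraints Ax = c (mod 2) over n_vars variables
    as CryptoMiniSAT XOR clauses ("x" lines); a sketch, spacing assumed."""
    rng = random.Random(seed)
    lines = []
    for _ in range(m):
        row = [v for v in range(1, n_vars + 1) if rng.random() < density]
        if not row:
            continue  # skip the trivial empty constraint
        if rng.random() < 0.5:   # right-hand side bit c = 0:
            row[0] = -row[0]     # negating one literal flips the clause parity
        lines.append("x" + " ".join(str(lit) for lit in row) + " 0")
    return "\n".join(lines)
```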
5 Conclusions
We introduced a new approach, called PAWS, to the fundamental problem of sampling from a discrete probability distribution specified, up to a normalization constant, by a weight function, e.g., by
a discrete graphical model. While traditional sampling methods are based on the MCMC paradigm
and hence on some form of local search, PAWS can leverage more advanced combinatorial search
and optimization tools as a black box. A significant advantage over MCMC methods is that PAWS
comes with a strong accuracy guarantee: whenever combinatorial search succeeds, our analysis
provides a certificate that, with high probability, the samples are produced from an approximately
correct distribution. In contrast, accuracy guarantees for MCMC methods hold only in the limit, with
unknown and potentially exponential mixing times. Further, the hyper-parameters of PAWS can be
tuned to trade off runtime with accuracy. Our experiments demonstrate that PAWS outperforms
competing sampling methods on challenging domains for MCMC.
References
[1] N.N. Madras. Lectures on Monte Carlo Methods. American Mathematical Society, 2002. ISBN 0821829785.
[2] M. Jerrum and A. Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. Approximation algorithms for NP-hard problems, pages 482-520, 1997.
[3] Mihir Bellare, Oded Goldreich, and Erez Petrank. Uniform generation of NP-witnesses using an NP-oracle. Information and Computation, 163(2):510-526, 2000.
[4] Stefano Ermon, Carla P. Gomes, and Bart Selman. Uniform solution sampling using a constraint solver as an oracle. In UAI, pages 255-264, 2012.
[5] C.P. Gomes, A. Sabharwal, and B. Selman. Near-uniform sampling of combinatorial spaces using XOR constraints. In NIPS-2006, pages 481-488, 2006.
[6] S. Chakraborty, K. Meel, and M. Vardi. A scalable and nearly uniform generator of SAT witnesses. In CAV-2013, 2013.
[7] Vibhav Gogate and Pedro Domingos. Approximation by quantization. In UAI, pages 247-255, 2011.
[8] Radford M. Neal. Slice sampling. Annals of Statistics, pages 705-741, 2003.
[9] Stefano Ermon, Carla Gomes, Ashish Sabharwal, and Bart Selman. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In ICML, 2013.
[10] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[11] S. Vadhan. Pseudorandomness. Foundations and Trends in Theoretical Computer Science, 2011.
[12] O. Goldreich. Randomized methods in computation. Lecture Notes, 2011.
[13] D. Allouche, S. de Givry, and T. Schiex. Toulbar2, an open source exact cost function network solver. Technical report, INRIA, 2010.
[14] IBM ILOG. IBM ILOG CPLEX Optimization Studio 12.3, 2011.
[15] Carla P. Gomes, Willem Jan van Hoeve, Ashish Sabharwal, and Bart Selman. Counting CSP solutions using generalized XOR constraints. In AAAI, 2007.
[16] Steffen L. Lauritzen and David J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B (Methodological), pages 157-224, 1988.
[17] Yehuda Naveh, Michal Rimon, Itai Jaeger, Yoav Katz, Michael Vinov, Eitan Marcus, and Gil Shurek. Constraint-based random stimuli generation for hardware verification. AI Magazine, 28(3):13, 2007.
[18] Clark Barrett, Aaron Stump, and Cesare Tinelli. The Satisfiability Modulo Theories Library (SMT-LIB). www.SMT-LIB.org, 2010.
[19] Patrice Godefroid, Michael Y. Levin, David Molnar, et al. Automated whitebox fuzz testing. In NDSS, 2008.
[20] Patrice Godefroid, Michael Y. Levin, and David Molnar. SAGE: Whitebox fuzzing for security testing. Queue, 10(1):20:20-20:27, January 2012. ISSN 1542-7730.
[21] M. Soos, K. Nohl, and C. Castelluccia. Extending SAT solvers to cryptographic problems. In SAT-2009. Springer, 2009.
[22] Robert J. Bayardo and Joseph Daniel Pehoushek. Counting models using connected components. In AAAI-2000, pages 157-162, 2000.
| 4965 |@word version:1 briefly:2 polynomial:1 chakraborty:2 nd:1 open:2 heuristically:1 bn:1 accounting:1 harder:2 reduction:2 configuration:9 series:1 paw:27 daniel:1 tuned:1 past:2 outperforms:2 existing:1 discretization:9 michal:1 si:2 scatter:1 conjunctive:1 givry:1 fn:1 partition:4 plot:1 bart:4 hash:19 v:6 fewer:2 guess:1 item:1 selected:2 alone:1 beginning:1 provides:3 certificate:2 node:1 traverse:1 preference:2 org:1 height:1 mathematical:1 constructed:2 direct:1 become:2 introduce:3 pairwise:4 indeed:2 hardness:2 behavior:1 p1:1 blackbox:1 steffen:1 discretized:2 inspired:2 enumeration:2 curse:1 solver:7 increasing:4 lib:2 project:4 estimating:3 underlying:4 bounded:3 developed:1 finding:3 guarantee:9 every:2 runtime:4 fuzzing:1 rm:1 producing:1 before:1 understood:1 local:5 accordance:1 modify:1 limit:1 encoding:1 meet:1 path:1 becoming:1 approximately:3 black:2 might:1 inria:1 collect:1 challenging:1 bi:6 practical:5 testing:3 practice:3 yehuda:1 implement:1 procedure:7 jan:1 area:3 universal:6 maxx:5 thought:1 significantly:1 projection:2 get:2 cannot:2 close:3 onto:1 operator:1 context:1 applying:1 www:1 equivalent:1 map:4 schiex:1 simplicity:1 fill:1 embedding:6 searching:3 resp:1 construction:4 target:1 annals:1 magazine:1 exact:3 programming:2 modulo:1 us:4 designing:1 domingo:1 element:7 trend:2 approximated:1 expensive:1 marcu:1 ising:7 cesare:1 ep:4 disjunctive:1 observed:3 worst:4 region:1 connected:1 ordering:1 trade:3 decrease:1 counter:1 complexity:2 uniformity:1 solving:1 purely:1 upon:1 efficiency:1 compactly:1 easily:1 joint:1 chip:1 goldreich:2 represented:2 various:1 fast:3 monte:3 kp:6 query:15 hyper:4 quite:1 encoded:1 widely:1 modular:1 statistic:3 jerrum:1 patrice:2 advantage:3 indication:1 isbn:1 propose:1 outputting:2 interaction:4 product:2 coming:1 combining:2 loop:1 mixing:3 bug:2 double:1 empty:1 extending:2 p:4 produce:4 generating:1 guaranteeing:1 jaeger:1 ij:1 lauritzen:1 naveh:2 borrowed:1 solves:1 strong:2 implemented:1 come:1 sabharwal:4 correct:1 subsequently:1 ermon:3 elimination:1 require:1 fix:3 proposition:1 tighter:1 traversed:1 hold:4 practically:1 sufficiently:1 around:1 ground:2 normal:1 deciding:1 exp:2 algorithmic:1 claim:1 driving:1 optimizer:1 smallest:1 combinatorial:18 create:1 tool:6 weighted:7 survives:1 hope:1 clearly:1 csp:1 rather:1 asp:1 cornell:2 corollary:1 ax:2 focus:2 methodological:1 check:3 indicates:1 contrast:5 industrial:2 inference:5 inaccurate:2 typically:2 wij:3 subroutine:2 arg:1 uncovering:1 art:1 integration:3 fairly:2 marginal:5 once:4 construct:2 having:1 saving:1 sampling:51 survive:3 nearly:1 icml:1 np:13 report:3 stimulus:1 inherent:1 few:2 randomly:5 preserve:1 floating:1 cplex:2 maintain:1 microsoft:1 recalling:1 interest:3 chain:5 accurate:5 integral:1 tree:1 pseudorandomness:1 desired:5 theoretical:3 instance:12 witnessed:1 increased:2 wb:1 xeon:1 soft:2 yoav:1 allouche:1 cost:1 introducing:1 addressing:1 subset:2 conm:1 uniform:13 hundred:1 levin:2 too:4 answer:2 synthetic:3 density:1 fundamental:2 randomized:2 probabilistic:1 systematic:1 physic:1 off:3 michael:3 ashish:3 quickly:1 ctr:1 squared:2 aaai:2 again:1 choose:2 possibly:1 sinclair:1 american:1 expert:2 style:1 logr:2 return:4 account:1 potential:2 de:1 stump:1 register:1 depends:1 piece:1 later:1 break:1 dpll:1 parallel:1 defer:1 ass:1 accuracy:16 xor:2 variance:1 who:2 efficiently:2 landscape:1 conceptually:1 produced:7 none:1 carlo:3 whenever:2 higherdimensional:1 definition:7 against:1 failure:3 energy:1 pp:2 frequency:4 proof:6 
hamming:1 sampled:3 proved:1 knowledge:1 improves:1 satisfiability:5 dimensionality:1 carefully:2 hashing:4 higher:2 specify:2 box:2 generality:1 furthermore:1 ergodicity:2 until:1 sketch:1 lack:1 propagation:4 defines:1 mode:2 quality:1 vibhav:1 effect:1 requiring:2 true:7 contain:1 counterpart:1 equality:1 hence:2 y12:2 neal:1 attractive:5 yorktown:1 criterion:1 generalized:1 stress:1 complete:7 demonstrate:4 cp:2 stefano:3 hka:1 reasoning:1 variational:3 wise:3 fi:2 specialized:1 pseudocode:1 operand:1 empirically:1 exponentially:1 million:1 extend:1 katz:1 marginals:12 significant:2 refer:1 gibbs:5 ai:1 grid:5 similarly:1 erez:1 language:1 had:1 entail:1 hashed:1 add:2 base:1 showed:1 recent:3 issued:2 certain:3 inequality:2 binary:2 watson:1 arbitrarily:4 success:5 yi:1 seen:2 additional:1 lowerdimensional:1 determine:1 paradigm:2 arithmetic:1 branch:1 reduces:1 technical:1 believed:3 dept:2 feasibility:8 ensuring:1 scalable:1 y21:1 metric:1 iteration:1 normalization:4 sometimes:1 achieved:2 ompute:6 addition:1 else:2 source:2 ithaca:2 appropriately:1 rest:1 unlike:2 smt:2 tend:1 undirected:1 leveraging:1 mod:2 jordan:1 integer:1 surviving:4 call:1 near:2 leverage:2 counting:8 constraining:1 presence:1 vadhan:1 automated:1 independence:1 finish:1 affect:1 xj:2 competing:1 reduce:2 idea:4 inner:1 tradeoff:1 pivot:1 whether:3 handled:1 gb:1 queue:1 hia:1 remark:2 repeatedly:1 yik:2 generally:1 enumerate:1 detailed:1 involve:2 informally:1 amount:1 bellare:3 concentrated:1 hardware:3 rw:2 generate:1 gil:1 estimated:6 disjoint:1 per:1 discrete:15 itai:1 affected:2 key:2 pb:1 ram:1 graph:2 cryptominisat:2 geometrically:1 bayardo:1 year:1 realworld:1 package:1 run:6 powerful:1 family:6 eitan:1 decision:1 appendix:3 summarizes:1 bound:8 guaranteed:2 oracle:11 constraint:17 precisely:2 ri:1 software:7 x2:3 extremely:1 min:1 relatively:1 structured:1 according:3 representable:1 poor:1 smaller:2 increasingly:1 joseph:1 intuitively:3 projecting:1 bucket:4 computationally:1 ln:2 loose:1 fail:1 count:6 know:1 end:3 available:1 junction:1 willem:1 indirectly:1 alternative:1 rp:1 existence:1 original:3 rz:2 include:1 completed:1 graphical:11 sw:1 madras:1 prof:1 society:2 move:1 question:1 depart:1 strategy:2 traditional:1 subspace:2 reason:2 code:1 issn:1 index:1 gogate:1 ratio:3 balance:2 providing:1 setup:1 difficult:2 robert:1 potentially:3 trace:6 negative:2 sage:2 y11:2 implementation:1 design:1 cryptographic:1 unknown:1 perform:1 rimon:1 upper:3 fuzz:1 ilog:3 markov:3 benchmark:3 finite:4 january:1 situation:1 witness:3 extended:1 defining:1 arbitrary:1 introduced:2 propositional:2 pair:1 required:1 specified:6 david:3 security:1 chebychev:1 tremendous:1 nip:1 able:1 regime:2 program:2 built:1 max:1 royal:1 belief:4 wainwright:1 power:1 natural:1 rely:2 advanced:1 scheme:2 improve:1 spiegelhalter:1 library:1 nohl:1 irrespective:1 taming:1 prior:1 geometric:1 review:1 relative:1 occupation:1 embedded:1 loss:4 lecture:2 highlight:1 mixed:8 generation:5 interesting:1 proportional:3 var:1 generator:1 clark:1 foundation:2 degree:2 verification:10 consistent:2 sufficient:1 principle:1 toulbar2:3 ibm:5 succinctly:1 course:1 repeat:1 parity:5 infeasible:1 formal:2 bias:1 taking:1 barrier:1 distributed:3 slice:3 ghz:1 van:1 xn:1 world:2 valid:4 unweighted:2 selman:5 stuck:1 projected:1 far:2 meel:1 approximate:5 compact:3 preferred:1 uai:2 sat:6 summing:1 gomes:6 xi:8 alternatively:1 search:24 decade:1 table:1 reasonably:1 mse:3 complex:1 domain:2 main:2 y1b:1 vardi:1 cryptography:1 x1:4 augmented:2 
intel:1 representative:1 oded:1 ny:3 precision:2 fails:3 wish:7 exponential:6 clamped:1 answering:1 third:1 theorem:2 minute:1 embed:3 explored:1 r2:2 barrett:1 closeness:1 intractable:1 quantization:2 effectively:1 importance:1 execution:2 conditioned:3 budget:1 studio:1 gap:1 rejection:4 carla:4 likely:3 radford:1 pedro:1 springer:1 truth:2 succeed:1 sized:1 exposition:1 towards:1 feasible:5 hard:8 change:1 cav:1 specifically:4 determined:1 uniformly:11 lemma:12 called:3 succeeds:3 indicating:1 select:1 aaron:1 internal:1 support:1 evaluate:1 mcmc:12 |
4,382 | 4,966 | Learning Stochastic Inverses
Andreas Stuhlmüller
Brain and Cognitive Sciences
MIT
Jessica Taylor
Department of Computer Science
Stanford University
Noah D. Goodman
Department of Psychology
Stanford University
Abstract
We describe a class of algorithms for amortized inference in Bayesian networks.
In this setting, we invest computation upfront to support rapid online inference
for a wide range of queries. Our approach is based on learning an inverse factorization of a model's joint distribution: a factorization that turns observations into
root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These stochastic inverses
can be used to invert each of the computation steps leading to an observation,
sampling backwards in order to quickly find a likely explanation. We show that
estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the Inverse
MCMC algorithm, which uses stochastic inverses to make block proposals for a
Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets.
1
Introduction
Bayesian inference is computationally expensive. Even approximate, sampling-based algorithms
tend to take many iterations before they produce reasonable answers. In contrast, human recognition
of words, objects, and scenes is extremely rapid, often taking only a few hundred milliseconds, only
enough time for a single pass from perceptual evidence to deeper interpretation. Yet human perception and cognition are often well-described by probabilistic inference in complex models. How can
we reconcile the speed of recognition with the expense of coherent probabilistic inference? How can
we build systems, for applications like robotics and medical diagnosis, that exhibit similarly rapid
performance at challenging inference tasks?
One response to such questions is that these problems are not, and should not be, solved from scratch
each time they are encountered. Humans and robots are in the setting of amortized inference: they
have to solve many similar inference problems, and can thus offload part of the computational work
to shared precomputation and adaptation over time. This raises the question of which kinds of
precomputation and adaptation are useful. There is substantial previous work on adaptive inference
algorithms, including Cheng and Druzdzel (2000); Haario et al. (2006); Ortiz and Kaelbling (2000);
Roberts and Rosenthal (2009). While much of this work is focused on adaptation for a single
posterior inference, amortized inference calls for adaptation across many different inferences. In
this setting, we will often have considerable training data available in the form of posterior samples
from previous inferences; how should we use this data to adapt our inference procedure?
We consider using training samples to learn the inverse structure of a directed model. Posterior
inference is the task of inverting a probabilistic model: Bayes' theorem turns p(d|h) into p(h|d);
vision is commonly understood as inverse graphics (Horn, 1977) and, more recently, as inverse
physics (Sanborn et al., 2013; Watanabe and Shimojo, 2001); and conditional inference in probabilistic programs can be described as ?running a program backwards? (e.g., Wingate and Weber,
2013). However, while this is a good description of the problem that inference solves, conditional
sampling usually does not proceed backwards step-by-step. We suggest taking this view more literally and actually learning the inverse conditionals needed to invert the model.

[Figure 1 panels: the forward network over Reflectance, Illumination, Luminance, and Observation (with Gaussian and Gamma nodes and a "noisify luminance" step); a possible inverse factorization; and two local joint distributions, illumination vs. luminance and reflectance vs. luminance.]
Figure 1: A Bayesian network modeling brightness constancy in visual perception, a possible inverse
factorization, and two of the local joint distributions that determine the inverse conditionals.

For example, consider
the Bayesian network shown in Figure 1. In addition to the default "forward" factorization shown on
the left, we can consider an "inverse" factorization shown on the right. Knowing the conditionals for
this inverse factorization would allow us to rapidly sample the latent variables given an observation.
In this paper, we will explore what these factorizations look like for Bayesian networks, how to learn
them, and how to use them to construct block proposals for MCMC.
2 Inverse factorizations
Let p be a distribution on latent variables x = (x1 , . . . , xm ) and observed variables y =
(y1 , . . . , yn ). A Bayesian network G is a directed acyclic graph that expresses a factorization of
this joint distribution in terms of the distribution of each node conditioned on its parents in the
graph:
$$p(x, y) = \prod_{i=1}^{m} p(x_i \mid \mathrm{pa}_G(x_i)) \prod_{j=1}^{n} p(y_j \mid \mathrm{pa}_G(y_j))$$
When interpreted as a generative (causal) model, the observations y typically depend on a non-empty
set of parents, but are not themselves parents of any nodes.
In general, a distribution can be represented using many different factorizations. We say that a
Bayesian network H expresses an inverse factorization of p if the observations y do not have parents
(but may themselves be parents of some xi ):
p(x, y) = p(y)
m
Y
p(xi |paH (xi ))
i=1
As an example, consider the forward and inverse networks shown in Figure 1. We call the conditional
distributions p(xi |paH (xi )) stochastic inverses, with inputs paH (xi ) and output xi . If we could
sample from these distributions, we could produce samples from p(x|y) for arbitrary y, which solves
the problem of inference for all queries with the same set of observation nodes.
In general, there are many possible inverse factorizations. For each latent node, we can find a
factorization such that this node does not have children. This fact will be important in Section 4
when we resample subsets of inverse graphs. Algorithm 1 gives a heuristic method for computing
an inverse factorization given Bayes net G, observation nodes y, and desired leaf node xi . We
compute an ordering on the nodes of the original Bayes net from observations to leaf node. We then
add the nodes in order to the inverse graph, with dependencies determined by the graph structure of
the original network.
In the setting of amortized inference, past tasks provide approximate posterior samples for the corresponding observations. We therefore investigate learning inverses from such samples, and ways of
using approximate stochastic inverses for improving the efficiency of solving future inference tasks.
Algorithm 1: Heuristic inverse factorization
Input: Bayesian network G with latent nodes x and observed nodes y; desired leaf node xi
Output: Ordered inverse graph H
1: order x such that nodes close to y are first, leaf node xi is last
2: initialize H to empty graph
3: add nodes y to H
4: for node xj in x do
5:
add xj to H
6:
set paH (xj ) to a minimal set of nodes in H that d-separates xj from the remainder of H
based on the graph structure of G
7: end for
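As a concrete illustration, Algorithm 1 can be sketched on top of a graph library. The following Python is a rough rendering rather than the authors' implementation: it assumes networkx's d-separation test (exposed as `d_separated` in recent releases), a connected graph, and approximates the minimal set in step 6 with a greedy shrinking pass:

```python
import networkx as nx

def heuristic_inverse_factorization(G, y, leaf):
    """Sketch of Algorithm 1 for a DAG G with observed node set y.

    Latent nodes are ordered by undirected distance to the observations, with
    the desired leaf node last; each node's parent set in H starts as all of H
    and is greedily shrunk while it still d-separates the node from the rest.
    """
    U = G.to_undirected()
    dist = {v: min(nx.shortest_path_length(U, v, o) for o in y) for v in G}
    order = sorted((v for v in G if v not in y and v != leaf),
                   key=dist.get) + [leaf]

    H = nx.DiGraph()
    H.add_nodes_from(y)
    for x in order:
        pa = set(H.nodes)
        for cand in list(pa):               # greedy: small but not provably minimal
            smaller = pa - {cand}
            rest = set(H.nodes) - smaller
            if not rest or nx.d_separated(G, {x}, rest, smaller):
                pa = smaller
        H.add_node(x)
        H.add_edges_from((p, x) for p in pa)
    return H
```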
3 Learning stochastic inverses
It is easy to see that we can estimate conditional distributions p(xi |paH (xi )) using samples S drawn
from the prior p(x, y). For simplicity, consider discrete variables and an empirical frequency estimator:
$$\pi_S(x_i \mid \mathrm{pa}_H(x_i)) = \frac{\big|\{s \in S : x_i^{(s)} = x_i \,\wedge\, \mathrm{pa}_H(x_i)^{(s)} = \mathrm{pa}_H(x_i)\}\big|}{\big|\{s \in S : \mathrm{pa}_H(x_i)^{(s)} = \mathrm{pa}_H(x_i)\}\big|}$$
Because π_S is a consistent estimator of the probability of each outcome for each setting of the parent
variables, the following theorem follows immediately from the strong law of large numbers:
Theorem 1. (Learning from prior samples) Let H be an inverse factorization. For samples S drawn
from p(x, y), π_S(x_i | pa_H(x_i)) → p(x_i | pa_H(x_i)) almost surely as |S| → ∞.
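A minimal implementation of π_S is plain conditional counting. The sketch below assumes samples arrive as dictionaries of discrete variable values; the function names are illustrative:

```python
from collections import Counter, defaultdict

def train_frequency_inverse(samples, parents, target):
    """Tabulate pi_S(target | parents) from a list of sample dicts (a sketch)."""
    counts = defaultdict(Counter)
    for s in samples:
        counts[tuple(s[p] for p in parents)][s[target]] += 1
    return counts

def frequency_inverse(counts, parent_values):
    """Conditional distribution over the target; empty if the setting is unseen."""
    c = counts[tuple(parent_values)]
    total = sum(c.values())
    return {x: n / total for x, n in c.items()} if total else {}
```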
Samples generated from the prior may be sparse in regions that have high probability under the posterior, resulting in slow convergence of the inverses. We now show that valid inverse factorizations
allow us to learn from posterior samples as well.
Theorem 2. (Learning from posterior samples) Let H be an inverse factorization. For samples
S drawn from p(x|y), π(x_i | pa_H(x_i)) → p(x_i | pa_H(x_i)) almost surely as |S| → ∞ for values of
pa_H(x_i) that have positive probability under p(x|y).
Proof. For values pa_H(x_i) that are not in the support of p(x|y), π(x_i | pa_H(x_i)) is undefined. For
values pa_H(x_i) in the support, π(x_i | pa_H(x_i)) → p(x_i | pa_H(x_i), y) almost surely. By definition,
any node in a Bayesian network is independent of its non-descendants given its parent variables.
The nodes y are root nodes in H and hence do not descend from x_i. Therefore, p(x_i | pa_H(x_i), y) =
p(x_i | pa_H(x_i)) and the theorem holds.
Theorem 2 implies that we can use posterior samples from one observation set to learn inverses that
apply to all other observation sets: while samples from p(x|y) only provide global estimates for the
given posterior, it is guaranteed that the local estimates created by the procedure above are equivalent
to the query-independent conditionals p(xi |paH (xi )). In addition, we can combine samples from
distributions conditioned on several different observation sets to produce more accurate estimates of
the inverse conditionals.
In the discussion above, we can replace π with any consistent estimator of p(x_i | pa_H(x_i)). We
can also trade consistency for faster learning and generalization. This framework can make use of
any supervised machine learning technique that supports sampling from a distribution on predicted
outputs. For example, for discrete variables we can employ logistic regression, which provides fast
generalization and efficient sampling, but cannot, in general, represent the posterior exactly. Our
choice of predictor can be data-dependent?for example, we can add interaction terms to a logistic
regression predictor as more data becomes available.
For continuous variables, consider a predictor based on k-nearest neighbors that produces samples
as follows (Algorithm 2): Given new input values z, retrieve the k previously observed input-output
pairs that are closest to the current input values. Then, use a consistent density estimator to construct a density estimate on the nearby previous outputs and sample an output x_i from the estimated
distribution.

Algorithm 2: K-nearest neighbor density predictor
Input: Variable index i, inverse inputs z, samples S, number of neighbors k
Output: Sampled value for node x_i
1: retrieve the k nearest pairs (z^(1), x_i^(1)), ..., (z^(k), x_i^(k)) in S based on distance to z
2: construct density estimate q on x_i^(1), ..., x_i^(k)
3: sample from q
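Assuming the inputs live in R^d and using a Gaussian kernel density estimate as the generic consistent estimator, Algorithm 2 might look as follows (a sketch, not the authors' code):

```python
import numpy as np
from scipy.stats import gaussian_kde

def knn_density_sample(z, inputs, outputs, k):
    """Sketch of Algorithm 2 for one inverse conditional.

    inputs: (n, d) array of previously seen inverse inputs pa_H(x_i);
    outputs: (n,) array of the corresponding x_i values. Needs k >= 2
    neighbors with nonzero spread for the density estimate to be defined.
    """
    idx = np.argsort(np.linalg.norm(inputs - z, axis=1))[:k]  # k nearest inputs
    q = gaussian_kde(outputs[idx])   # density estimate on their outputs
    return q.resample(1)[0, 0]       # one sample from q
```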
Showing that this estimator converges to the true conditional density p(x|z) is more subtle. If the
conditional densities are smooth in the sense that:
$$\forall \varepsilon > 0\ \exists \delta > 0 : \forall z_1, z_2 \quad d(z_1, z_2) < \delta \Rightarrow D_{\mathrm{KL}}(p(x|z_1), p(x|z_2)) < \varepsilon,$$
then we can achieve any desired accuracy of approximation by assuring that the nearest neighbors
used all lie within a δ-ball, but that the number of neighbors goes to infinity. We can achieve this
by increasing k slowly enough in |S|. The exact rate at which we may increase depends on the
distribution and may be difficult to determine.
4 Inverse MCMC
We have described how to compute the structure of inverse Bayes nets, and how to learn the associated conditional distributions and densities from prior and posterior samples. This produces fast, but
possibly biased recognition models. To get a consistent estimator, we use these recognition models as part of a Metropolis-Hastings scheme that, as the amount of training data grows, converges
to Gibbs sampling for proposals of size 1, to blocked-Gibbs for larger proposals, and to perfect
posterior sampling for proposals of size |G|.
We propose the following Inverse MCMC procedure (Algorithm 3): Offline, use Algorithm 1 to
compute an inverse graph for each latent node and train each local inverse in this graph from (posterior or prior) samples. Online, run Metropolis-Hastings with the proposal mechanism shown in
Algorithm 4, which resamples a set of up to k variables using the trained inverses.¹ With little training data, we will want to make small proposals (small k) in order to achieve a reasonable acceptance
rate; with more training data, we can make larger proposals and expect to succeed.
Theorem 3. Let G be a Bayesian network, let π be a consistent estimator (for inverse conditionals),
let {H_i}_{i ∈ 1..m} be a collection of inverse graphs produced using Algorithm 1, and assume a source
of training samples (prior or posterior) with full support. Then, as training set size |S| → ∞,
Inverse MCMC with proposal size k converges to block-Gibbs sampling where blocks are the last k
nodes in each Hi . In particular, it converges to Gibbs sampling for proposal size k = 1 and to exact
posterior sampling for k = |G|.
Proof. We must show that proposals are made from the conditional posterior in the limit of large
training data. Fix an inverse H, and let x be the last k variables in H. Let paH (x) be the union of
H-parents of variables in x that are not themselves in x. By construction according to Algorithm
1, paH (x) form a Markov blanket of x (that is, x is conditionally independent of other variables
in G, given paH (x)). Now the conditional distribution over x factorizes along the inverse graph:
$p(x \mid \mathrm{pa}_H(x)) = \prod_{i=k}^{|H|} p(x_i \mid \mathrm{pa}_H(x_i))$. But by theorems 1 and 2, the estimators π converge, when
they are defined, to the corresponding conditional distributions, π(x_i | pa_H(x_i)) → p(x_i | pa_H(x_i));
since we assume full support, π(x_i | pa_H(x_i)) is defined wherever p(x_i | pa_H(x_i)) is defined. Hence,
using the estimated inverses to sequentially sample the x variables results, in the limit, in samples
from the conditional distribution given remaining variables. (Note that, in the limit, these proposals
will always be accepted.) This is the definition of block-Gibbs sampling. The special cases of k = 1
(Gibbs) and k = |G| (posterior sampling) follow immediately.
¹ In a setting where we only ever resample up to k variables, we only need to estimate the relevant inverses,
i.e., not all conditionals for the full inverse graph.
Algorithm 3: Inverse MCMC
Input: Prior or posterior samples S
Output: Samples x^(1), ..., x^(T)
Offline (train inverses):
1: for i in 1 ... m do
2:   H_i <- from Algorithm 1
3:   for j in 1 ... m do
4:     train inverse π_S(x_j | pa_{H_i}(x_j))
5:   end for
6: end for
Online (MH with inverse proposals):
1: for t in 1 ... T do
2:   x', p_fw, p_bw from Algorithm 4
3:   x <- x' with MH acceptance rule
4: end for

Algorithm 4: Inverse MCMC proposer
Input: State x, observations y, ordered inverse graphs {H_i}_{i ∈ 1..m}, proposal size k_max, inverses π
Output: Proposed state x', forward and backward probabilities p_fw and p_bw
1: H <- Uniform({H_i}_{i ∈ 1..m})
2: k <- Uniform({0, 1, ..., k_max - 1})
3: x' <- x
4: p_fw, p_bw <- 1
5: for j in n - k, ..., n do
6:   let x_l be the jth variable in H
7:   x'_l ~ π(x_l | pa_H(x'_l))
8:   p_fw <- p_fw · π(x'_l | pa_H(x'_l))
9:   p_bw <- p_bw · π(x_l | pa_H(x_l))
10: end for
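Online, the Algorithm 4 proposal plugs into a standard Metropolis-Hastings accept/reject. A minimal sketch in log space, with the proposer and the joint density treated as supplied callables (assumed interfaces):

```python
import numpy as np

def inverse_mcmc_step(x, log_joint, propose, rng=np.random.default_rng()):
    """One MH step with an inverse block proposal (a sketch).

    propose(x) -> (x_new, log_pfw, log_pbw) wraps Algorithm 4 with
    log-probabilities; log_joint(x) is the unnormalized log of p(x, y)
    with the observations clamped.
    """
    x_new, log_pfw, log_pbw = propose(x)
    log_alpha = (log_joint(x_new) + log_pbw) - (log_joint(x) + log_pfw)
    return x_new if np.log(rng.uniform()) < log_alpha else x
```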
Instead of learning the k=1 "Gibbs" conditionals for each inverse graph, we can often precompute
these distributions to "seed" our sampler. This suggests a bootstrapping procedure for amortized
inference on observations y^(1), ..., y^(t): first, precompute the "Gibbs" distributions so that k=1
proposals will be reasonably effective; then iterate between training on previously generated approximate posterior samples and doing inference on the next observation. Over time, increase the
size of proposals, possibly depending on acceptance ratio or other heuristics.
For networks with near-deterministic dependencies, Gibbs may be unable to generate training samples of sufficient quality. This poses a chicken-and-egg problem: we need a sufficiently good posterior sampler to generate the data required to train our sampler. To address this problem, we propose
a simple annealing scheme: We introduce a temperature parameter t that controls the extent to
which (almost-)deterministic dependencies in a network are relaxed. We produce a sequence of
trained samplers, one for each temperature, by generating samples for a network with temperature
ti+1 using a sampler trained on approximate samples for the network with next-higher temperature
ti . Finally, we discard all samplers except for the sampler trained on the network with t = 0, the
network of interest.
In the next section, we explore the practicality of such bootstrapping schemes as well as the general
approach of Inverse MCMC.
5 Experiments
We are interested in networks such that (1) there are many layers of nodes, with some nodes far
removed from the evidence, (2) there are many observation nodes, allowing for a variety of queries,
and (3) there are strong dependencies, making local Gibbs moves challenging.
We start by studying the behavior of the Inverse MCMC algorithm with empirical frequency estimator on a 225-node rectangular grid network from the UAI 2008 inference competition. This network
has binary nodes and approximately 50% deterministic dependencies, which we relax to dependencies with strength .99. We select the 15 nodes on the diagonal as observations and remove any nodes
below, leaving a triangular network with 120 nodes and treewidth 15 (Figure 2). We compute the
true marginals P* using IJGP (Mateescu et al., 2010), and calculate the error of our estimates P^s as
$$\text{error} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{|X_i|} \sum_{x_i \in X_i} |P^*(X_i = x_i) - P^s(X_i = x_i)|.$$
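This error measure is direct to compute from per-variable marginal vectors; a small numpy sketch:

```python
import numpy as np

def marginal_error(p_true, p_est):
    """Error in marginals as defined above: mean over variables of the average
    absolute deviation across each variable's values."""
    return float(np.mean([np.abs(np.asarray(pt) - np.asarray(pe)).sum() / len(pt)
                          for pt, pe in zip(p_true, p_est)]))
```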
We generate 20 inference tasks as sources of training samples by sampling values for the 15 observation nodes uniformly at random. We precompute the "final" inverse conditionals as outlined above,
producing a Gibbs sampler when k=1. For each inference task, we use this sampler to generate 10^5
approximate posterior samples.
[Figure 3 plot: error in marginals vs. time (0-60 seconds) for Gibbs and Inverses (10x10, 10x100, 10x1000). Figure 4 plot: error in marginals vs. number of training samples (1e+01 to 1e+05) for Inverses (kNN).]
Figure 2: Schema of the Bayes net structure used in experiment 1. Thick arrows indicate almost-deterministic dependencies, shaded nodes are observed. The actual network has 15 layers with a total of 120 nodes.
Figure 3: The effect of training on approximate posterior samples for 10 inference tasks. As the number of training samples per task increases, Inverse MCMC with proposals of size 20 performs new inference tasks more quickly.
Figure 4: Learning an inverse distribution for the brightness constancy model (Figure 1) from prior samples using the KNN density predictor. More training samples result in better estimates after the same number of MCMC steps.
Figures 3 and 5 show the effect of training the frequency estimator on 10 inference tasks and testing
on a different task (averaged over 20 runs). Inverse proposals of (up to) size k=20 do worse than
pure Gibbs sampling with little training (due to higher rejection rate), but they speed convergence as
the number of training samples increases. More generally, large proposals are likely to be rejected
without training, but improve convergence after training.
Figure 6 illustrates how the number of inference tasks influences error and MH acceptance ratio in
a setting where the total number of training samples is kept constant. Surprisingly, increasing the
number of training tasks from 5 to 15 has little effect on error and acceptance ratio for this network.
That is, it seems relatively unimportant which posterior the training samples are drawn from; we
may expect different results when posteriors are more sparse.
Figure 7 shows how different sources of training data affect the quality of the trained sampler (averaged over 20 runs). As the strength of near-deterministic dependencies increases, direct training
on Gibbs samples becomes infeasible. In this regime, we can still train on prior samples and on
Gibbs samples for networks with relaxed dependencies. Alternatively, we can employ the annealing scheme outlined in the previous section. In this example, we take the temperature ladder to be
[.2, .1, .05, .02, .01, 0]; that is, we start by learning inverses for the relaxed network where all CPT
probabilities are constrained to lie within [.2, .8]; we then use these inverses as proposers for MCMC
inference on a network constrained to CPT probabilities in [.1, .9], learn the corresponding inverses,
and continue, until we reach the network of interest (at temperature 0).
While the empirical frequency estimator used in the above experiments provides an attractive asymptotic convergence guarantee (Theorem 3), it is likely to generalize slowly from small amounts of
training data. For practical purposes, we may be more interested in getting useful generalizations
quickly than converging to a perfect proposal distribution. Fortunately, the Inverse MCMC algorithm can be used with any estimator for local conditionals, consistent or not. We evaluate this idea
on a 12-node subset of the network used in the previous experiments. We learn complete inverses,
resampling up to 12 nodes at once. We compare inference using a logistic regression estimator with
L2 regularization (with and without interaction terms) to inference using the empirical frequency
estimator. Figure 9 shows the error (integrated over time to better reflect convergence speed) against
the number of training examples, averaged over 300 runs. The regression estimator with interaction
terms results in significantly better results when training on few posterior samples, but is ultimately
overtaken by the consistent empirical estimator.
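The regression-based estimator can be assembled from standard parts; this sketch uses scikit-learn (an assumption, the paper names no library), with `PolynomialFeatures` supplying the interaction terms:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def make_inverse_classifier(interactions=False):
    """L2-regularized logistic regression over inverse-parent values,
    optionally with pairwise interaction features (the "L2 + ^2" variant)."""
    steps = []
    if interactions:
        steps.append(PolynomialFeatures(degree=2, interaction_only=True,
                                        include_bias=False))
    steps.append(LogisticRegression(penalty="l2", C=1.0))
    return make_pipeline(*steps)

# fit on (parent values, x_i values); clf.predict_proba(...) then serves as
# the proposal distribution over x_i.
```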
Next, we use the KNN density predictor to learn inverse distributions for the continuous Bayesian
network shown in Figure 1. To evaluate the quality of the learned distributions, we take 1000
[Figure 5 heatmaps: maximum proposal size (5-30) vs. log10(training samples per task) (1-4); left panel shows error in marginals (0.04-0.18), right panel shows acceptance ratio (0.2-1.0).]
Figure 5: Without training, big inverse proposals result in high error, as they are unlikely to be
accepted. As we increase the number of approximate posterior samples used to train the MCMC
sampler, the acceptance probability for big proposals goes up, which decreases overall error.
[Figure 6 plot: acceptance ratio as a function of maximum proposal size (5-30) and number of tasks (5-15). Figure 7 plot: test error (after 10s) by training source (Prior, Gibbs, Relaxed Gibbs, Annealing) for networks with determinism 0.95 and 0.9999.]
Figure 6: For the network under consideration,
increasing the number of tasks (i.e., samples
for other observations) we train on has little effect on acceptance ratio (and error) if we keep
the total number of training samples constant.
Figure 7: For networks without hard determinism, we can train on Gibbs samples. For others, we can use prior samples, Gibbs samples
for relaxed networks, and samples from a sequence of annealed Inverse samplers.
samples using Inverse MCMC and compare marginals to a solution computed by JAGS (Plummer
et al., 2003). As we refine the inverses using forward samples, the error in the estimated marginals
decreases towards 0, providing evidence for convergence towards a posterior sampler (Figure 4).
To evaluate Inverse MCMC in more breadth, we run the algorithm on all binary Bayes nets with
up to 500 nodes that have been submitted to the UAI 08 inference competition (216 networks).
Since many of these networks exhibit strong determinism, we train on prior samples and apply
the annealing scheme outlined above to generate approximate posterior samples. For training and
testing, we use the evidence provided with each network. We compute the error in marginals as
described above for both Gibbs (proposal size 1) and Inverse MCMC (maximum proposal size 20).
To summarize convergence over the 1200s of test time, we compute the area under the error curves
(Figure 8). Each point represents a single run on a single model. We label different classes of
networks. For the grid networks, grid-k denotes a network with k% deterministic dependencies.
While performance varies across network classes (with extremely deterministic networks making
the acquisition of training data challenging), the comparison with Gibbs suggests that learned block
proposals frequently help.
Overall, these results indicate that Inverse MCMC is of practical benefit for learning block proposals
in reasonably large Bayes nets and using a realistic amount of training data (an amount that might
result from amortizing over five or ten inferences).
[Figure 8 scatter plot: Inverse MCMC error integral vs. Gibbs error integral (both 0.0-0.4), one mark per run, with model classes grid-50, grid-75, grid-90, students, fs, bn2o. Figure 9 plot: error integral (1s) vs. number of training samples (10-500) for the frequency estimator, logistic regression (L2), and logistic regression (L2 + interaction terms).]
Figure 8: Each mark represents a single run of a model from the UAI 08 inference competition. Marks below the line indicate that integrated error over 1200s of inference is lower for Inverse MCMC than Gibbs sampling.
Figure 9: Integrated error (over 1s of inference) as a function of the number of samples used to train inverses, comparing logistic regression with and without interaction terms to an empirical frequency estimator.

6 Related work
A recognition network (Morris, 2001) is a multilayer perceptron used to predict posterior marginals.
In contrast to our work, a single global predictor is used instead of small, compositional prediction
functions. By learning local inverses our technique generalizes in a more fine-grained way, and can
be combined with MCMC to provide unbiased samples. Adaptive MCMC techniques such as those
presented in Roberts and Rosenthal (2009) and Haario et al. (2006) are used to tune parameters
of MCMC algorithms, but do not allow arbitrarily close adaptation of the underlying model to the
posterior, whereas our method is designed to allow such close approximation. A number of adaptive
importance sampling algorithms have been proposed for Bayesian networks, including Shachter
and Peot (1989), Cheng and Druzdzel (2000), Yuan and Druzdzel (2012), Yu and Van Engelen
(2012), Hernandez et al. (1998), Salmeron et al. (2000), and Ortiz and Kaelbling (2000). These
techniques typically learn Bayes nets which are directed "forward", which means that the conditional
distributions must be learned from posterior samples, creating a chicken-and-egg problem. Because
our trained model is directed "backwards", we can learn from both prior and posterior samples.
Gibbs sampling and single-site Metropolis-Hastings are known to converge slowly in the presence
of determinism and long-range dependencies. It is well-known that this can be addressed using block
proposals, but such proposals typically need to be built manually for each model. In our framework,
block proposals are learned from past samples, with a natural parameter for adjusting the block size.
7 Conclusion
We have described a class of algorithms, for the setting of amortized inference, based on the idea
of learning local stochastic inverses, the information necessary to "run a model backward". We
have given simple methods for estimating and using these inverses as part of an MCMC algorithm.
In exploratory experiments, we have shown how learning from past inference tasks can reduce the
time required to estimate quantities of interest. Much remains to be done to explore this framework. Based on our results, one particularly promising avenue is to explore estimators that initially
generalize quickly (such as regression), but back off to a sound estimator as the training data grows.
Acknowledgments
We thank Ramki Gummadi and anonymous reviewers for useful comments. This work was supported by a John S. McDonnell Foundation Scholar Award.
References
J. Cheng and M. Druzdzel. AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks. Journal of Artificial Intelligence Research, 2000.
H. Haario, M. Laine, A. Mira, and E. Saksman. DRAM: efficient adaptive MCMC. Statistics and Computing, 16(4):339-354, 2006.
L. D. Hernandez, S. Moral, and A. Salmeron. A Monte Carlo algorithm for probabilistic propagation in belief networks based on importance sampling and stratified simulation techniques. International Journal of Approximate Reasoning, 18(1):53-91, 1998.
B. K. Horn. Understanding image intensities. Artificial Intelligence, 8(2):201-231, 1977.
R. Mateescu, K. Kask, V. Gogate, and R. Dechter. Join-graph propagation algorithms. Journal of Artificial Intelligence Research, 37(1):279-328, 2010.
Q. Morris. Recognition networks for approximate inference in BN20 networks. Morgan Kaufmann Publishers Inc., Aug. 2001.
L. E. Ortiz and L. P. Kaelbling. Adaptive importance sampling for estimation in structured domains. In Proc. of the 16th Ann. Conf. on Uncertainty in A.I. (UAI-00), pages 446-454. Morgan Kaufmann Publishers, 2000.
M. Plummer et al. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. URL http://citeseer.ist.psu.edu/plummer03jags.html, 2003.
G. Roberts and J. Rosenthal. Examples of adaptive MCMC. Journal of Computational and Graphical Statistics, 18(2):349-367, 2009.
A. Salmeron, A. Cano, and S. Moral. Importance sampling in Bayesian networks using probability trees. Computational Statistics and Data Analysis, 34(4):387-413, Oct. 2000.
A. N. Sanborn, V. K. Mansinghka, and T. L. Griffiths. Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychological Review, 120(2):411, Apr. 2013.
R. D. Shachter and M. A. Peot. Simulation approaches to general probabilistic inference on belief networks. In Proc. of the 5th Ann. Conf. on Uncertainty in A.I. (UAI-89), pages 311-318, New York, NY, 1989. Elsevier Science.
K. Watanabe and S. Shimojo. When sound affects vision: effects of auditory grouping on visual motion perception. Psychological Science, 12(2):109-116, 2001.
D. Wingate and T. Weber. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013.
H. Yu and R. A. Van Engelen. Refractor importance sampling. arXiv preprint arXiv:1206.3295, 2012.
C. Yuan and M. J. Druzdzel. Importance sampling in Bayesian networks: An influence-based approximation strategy for importance functions. arXiv preprint arXiv:1207.1422, 2012.
| 4966 |@word seems:1 simulation:2 bn:1 citeseer:1 brightness:2 offload:1 past:3 current:1 z2:3 comparing:1 yet:1 must:2 john:1 dechter:1 realistic:1 remove:1 designed:1 resampling:1 generative:1 leaf:4 intelligence:3 haario:3 provides:2 node:39 five:1 along:1 direct:1 descendant:1 yuan:2 combine:1 introduce:1 x0:4 peot:2 rapid:3 behavior:1 themselves:3 frequently:1 mechanic:1 brain:1 little:4 actual:1 increasing:3 becomes:2 provided:1 estimating:1 underlying:1 what:1 kind:1 interpreted:1 bootstrapping:2 guarantee:1 ti:2 precomputation:2 exactly:1 control:1 medical:1 yn:1 producing:1 before:2 positive:1 understood:1 local:8 limit:3 hernandez:2 approximately:1 might:1 suggests:2 challenging:3 shaded:1 factorization:19 stratified:1 range:2 averaged:3 directed:4 acknowledgment:1 horn:2 practical:2 yj:2 testing:2 union:1 block:10 procedure:4 area:1 empirical:6 significantly:1 word:1 griffith:1 suggest:1 get:1 cannot:1 close:3 kmax:2 influence:2 equivalent:1 deterministic:7 reviewer:1 annealed:1 go:2 focused:1 rectangular:1 simplicity:1 immediately:2 pure:1 estimator:20 rule:1 retrieve:2 exploratory:1 construction:1 assuring:1 exact:2 programming:1 us:1 amortized:6 expensive:1 recognition:6 particularly:1 observed:4 constancy:2 preprint:3 solved:1 wingate:2 descend:1 calculate:1 region:1 ordering:1 trade:1 removed:1 decrease:2 substantial:1 ultimately:1 trained:6 raise:1 depend:1 solving:1 efficiency:2 joint:3 mh:3 represented:1 x100:1 train:10 fast:2 describe:2 effective:1 plummer:2 monte:1 query:4 artificial:3 outcome:1 heuristic:3 stanford:2 solve:1 larger:2 say:1 relax:1 triangular:1 statistic:3 knn:3 final:1 online:3 sequence:2 net:8 propose:2 interaction:4 adaptation:5 remainder:1 relevant:1 rapidly:1 jag:2 achieve:3 description:1 intuitive:1 competition:3 getting:1 saksman:1 invest:1 parent:8 empty:2 convergence:8 produce:6 generating:1 perfect:2 converges:4 newtonian:1 object:2 help:1 depending:1 pose:1 nearest:4 mansinghka:1 aug:1 strong:3 solves:2 predicted:1 implies:1 blanket:1 treewidth:1 indicate:3 thick:1 stochastic:7 human:3 fix:1 generalization:3 scholar:1 anonymous:1 hold:1 sufficiently:1 seed:1 cognition:1 predict:1 resample:2 purpose:1 estimation:1 proc:2 label:1 mit:1 gaussian:2 always:1 factorizes:1 contrast:2 sense:1 elsevier:1 inference:42 dependent:1 typically:3 integrated:3 unlikely:1 initially:1 x0l:4 interested:2 overall:2 html:1 overtaken:1 constrained:2 special:1 initialize:1 construct:3 once:1 psu:1 sampling:24 manually:1 represents:2 look:1 yu:2 future:1 others:1 few:2 employ:2 gamma:2 ortiz:3 jessica:1 acceptance:9 interest:3 investigate:1 undefined:1 accurate:1 integral:3 necessary:1 tree:1 taylor:1 desired:3 causal:1 minimal:1 psychological:2 modeling:1 kaelbling:3 subset:2 hundred:1 predictor:7 uniform:2 graphic:1 dependency:11 answer:1 varies:1 combined:1 density:9 international:1 probabilistic:7 physic:2 off:1 quickly:4 reflect:1 x1000:1 slowly:3 possibly:2 worse:1 cognitive:1 creating:1 conf:2 leading:1 amortizing:1 student:1 inc:1 depends:1 root:2 view:1 doing:1 schema:1 start:2 bayes:9 accuracy:1 kaufmann:2 generalize:2 bayesian:16 produced:1 carlo:1 submitted:1 evidential:1 reach:1 definition:2 against:1 acquisition:1 frequency:7 proof:2 associated:1 sampled:1 auditory:1 pah:34 adjusting:1 subtle:1 actually:1 back:1 higher:2 supervised:1 follow:1 response:1 done:1 rejected:1 druzdzel:5 until:1 hastings:4 ally:1 propagation:2 logistic:6 quality:3 grows:2 effect:5 true:2 unbiased:1 hence:2 regularization:1 conditionally:1 attractive:1 complete:1 performs:1 
motion:1 temperature:6 reasoning:2 cano:1 weber:2 resamples:1 consideration:1 image:1 recently:1 variational:1 interpretation:1 marginals:8 accumulate:1 ijgp:1 blocked:1 kask:1 gibbs:27 ai:1 consistency:1 grid:6 similarly:1 outlined:3 robot:1 add:4 posterior:33 closest:1 discard:1 binary:2 continue:1 arbitrarily:1 morgan:2 fortunately:1 relaxed:6 surely:3 converge:3 determine:2 full:3 sound:2 x10:1 smooth:1 faster:1 adapt:1 long:1 gummadi:1 dkl:1 award:1 converging:1 prediction:1 regression:8 multilayer:1 vision:2 arxiv:6 iteration:1 represent:1 invert:2 robotics:1 chicken:2 proposal:32 addition:2 conditionals:10 want:1 fine:1 annealing:5 whereas:1 addressed:1 source:4 leaving:1 publisher:2 goodman:1 biased:1 comment:1 tend:1 call:2 near:2 presence:1 backwards:4 enough:2 easy:1 automated:1 variety:2 xj:6 iterate:1 psychology:1 affect:2 andreas:1 idea:2 reduce:1 knowing:1 avenue:1 bn20:1 url:1 moral:2 f:1 proceed:1 york:1 constitute:1 compositional:1 cpt:2 useful:3 generally:1 unimportant:1 tune:1 amount:4 ten:1 morris:2 generate:5 http:1 millisecond:1 upfront:1 estimated:4 rosenthal:3 per:3 diagnosis:1 discrete:2 express:2 ist:1 drawn:4 breadth:1 kept:1 backward:2 luminance:7 graph:16 asymptotically:1 laine:1 run:8 inverse:85 uncertainty:2 almost:5 reasonable:2 proposer:2 layer:2 hi:5 guaranteed:1 cheng:3 encountered:1 refine:1 strength:2 noah:1 infinity:1 scene:1 colliding:1 nearby:1 speed:3 extremely:2 relatively:1 department:2 structured:1 according:1 ball:1 precompute:3 mcdonnell:1 across:2 metropolis:4 wherever:1 making:2 computationally:1 previously:2 remains:1 turn:2 mechanism:1 needed:1 end:5 studying:1 available:2 generalizes:1 apply:2 original:2 denotes:1 running:1 remaining:1 reconciling:1 graphical:2 log10:2 reflectance:4 practicality:1 build:1 move:1 question:2 quantity:1 strategy:1 diagonal:1 exhibit:2 sanborn:2 distance:1 separate:1 unable:1 thank:1 extent:1 index:1 gogate:1 ratio:6 providing:1 difficult:1 robert:3 expense:1 dram:1 allowing:1 observation:22 markov:1 ever:1 y1:1 arbitrary:1 intensity:1 inverting:1 pair:2 required:2 z1:3 coherent:1 learned:4 address:1 usually:1 perception:3 xm:1 below:2 regime:2 summarize:1 program:3 built:1 including:2 explanation:1 belief:2 natural:1 scheme:5 improve:1 ladder:1 created:1 prior:15 understanding:1 l2:3 review:1 asymptotic:1 law:1 expect:2 acyclic:1 foundation:1 sufficient:1 consistent:7 mateescu:2 surprisingly:1 last:3 supported:1 jth:1 infeasible:1 offline:2 allow:4 deeper:1 perceptron:1 wide:1 neighbor:5 taking:2 pag:2 sparse:2 determinism:5 benefit:1 van:2 curve:1 default:1 valid:1 forward:5 commonly:1 adaptive:7 collection:1 made:1 far:1 approximate:11 keep:1 global:2 sequentially:1 uai:5 shimojo:2 xi:68 alternatively:1 continuous:2 latent:5 stuhlmuller:1 promising:1 learn:10 reasonably:2 improving:1 complex:1 domain:1 apr:1 arrow:1 big:2 reconcile:1 child:1 x1:1 site:1 join:1 egg:2 slow:1 ny:1 watanabe:2 mira:1 xl:4 lie:2 perceptual:1 grained:1 theorem:9 showing:1 evidence:4 grouping:1 importance:8 illumination:4 conditioned:2 illustrates:1 rejection:1 likely:3 explore:5 shachter:2 visual:2 ordered:2 succeed:1 conditional:13 oct:1 ann:2 towards:2 shared:1 replace:1 considerable:1 hard:1 determined:1 except:1 uniformly:1 sampler:15 pfw:5 total:3 pas:1 accepted:2 select:1 support:6 mark:2 evaluate:3 mcmc:26 scratch:1 |
4,383 | 4,967 | Approximate Gaussian process inference for the drift
of stochastic differential equations
Andreas Ruttor
Computer Science, TU Berlin
[email protected]
Philipp Batz
Computer Science, TU Berlin
[email protected]
Manfred Opper
Computer Science, TU Berlin
[email protected]
Abstract
We introduce a nonparametric approach for estimating drift functions in systems
of stochastic differential equations from sparse observations of the state vector.
Using a Gaussian process prior over the drift as a function of the state vector, we
develop an approximate EM algorithm to deal with the unobserved, latent dynamics between observations. The posterior over states is approximated by a piecewise
linearized process of the Ornstein-Uhlenbeck type and the MAP estimation of the
drift is facilitated by a sparse Gaussian process regression.
1 Introduction
Gaussian process (GP) inference methods have been successfully applied to models for dynamical
systems, see e.g. [1-3]. Usually, these studies have dealt with discrete time dynamics, where one
uses a GP prior for modeling transition function and the measurement function of the system. On
the other hand, many dynamical systems in the physical world evolve in continuous time and the
noisy dynamics is described naturally in terms of stochastic differential equations (SDE). SDEs
have also attracted considerable interest in the NIPS community in recent years [4-7]. So far most
inference approaches have dealt with the posterior prediction of state variables between observations
(smoothing) and the estimation of parameters contained in the drift function, which governs the
deterministic part of the microscopic time evolution. Since the drift is usually a nonlinear function
of the state vector, a nonparametric estimation using Gaussian process priors would be a natural
choice, when a large number of data is available. A recent result by [8, 9] presented an important
step in this direction. The authors have shown that GPs are a conjugate family to SDE likelihoods. In
fact, if an entire path of dense observations of the state dynamics is observed, the posterior process
over the drift is exactly a GP. Unfortunately, this simplicity is lost, when observations are not dense,
but separated by larger time intervals. In [8] this sparse, incomplete observation case has been
treated by a Gibbs sampler, which alternates between sampling complete state paths of the SDE and
creating GP samples for the drift. A nontrivial problem is the sampling from SDE paths conditioned
on observations. Second, the densely sampled hidden paths are equivalent to a large number of
imputed observations, for which the matrix inversions required by the GP posterior predictions can
become computationally costly. It was shown in [8] that in the univariate case for GP priors with
precision operators (the inverses of covariance kernels) which are differential operators efficient
predictions can be realized in terms of the solutions of differential equations.
In this paper, we develop an alternative approximate expectation maximization (EM) method for
inference from sparse observations, which is faster than the sampling approach and can also be
applied to arbitrary kernels and multivariate SDEs. In the E-Step we approximate expectations over
state paths by those of a locally fitted Ornstein-Uhlenbeck model. The M-step for computing the
maximum posterior GP prediction of the drift depends on a continuum of function values and is thus
approximated by a sparse GP.
The paper is organized as follows. Section 2 introduces stochastic differential equations and section
3 discusses GP based inference for completely observed paths. In section 4 our approximate EM
algorithm is derived and its performance is demonstrated on a variety of SDEs in section 6. Section
7 presents a discussion.
2 Stochastic differential equations
We consider continuous-time univariate Markov processes of the diffusion type, where the dynamics
of a d-dimensional state vector $X_t \in \mathbb{R}^d$ is given by the stochastic differential equation (SDE)
$$dX_t = f(X_t)\,dt + D^{1/2}\,dW. \tag{1}$$
The vector function $f(x) = (f^1(x), \dots, f^d(x))$ defines the deterministic drift and W is a Wiener
process, which models additive white noise. D is the diffusion matrix, which we assume to be
independent of x. We will not attempt a rigorous treatment of probability measures over continuous
time paths here, but will mostly assume for our derivations that the process can be approximated
with a discrete time process $X_t$ in the Euler-Maruyama discretization [10], where the times $t \in G$
are on a regular grid $G = \{0, \Delta t, 2\Delta t, \dots\}$ and where $\Delta t$ is some small microscopic time. The
discretized process is given by
$$X_{t+\Delta t} - X_t = f(X_t)\,\Delta t + D^{1/2} \sqrt{\Delta t}\,\epsilon_t, \tag{2}$$
where $\epsilon_t \sim \mathcal{N}(0, I)$ is a sequence of i.i.d. Gaussian noise vectors. We will usually take the limit
$\Delta t \to 0$ only in expressions where (Riemann) sums are over nonrandom quantities, i.e. where
expectations over paths have been carried out and can be replaced by ordinary integrals.
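Simulating the discretization (2) directly takes only a few lines; the sketch below is illustrative, and the double-well drift in the usage comment is a common example rather than a formula given in the text:

```python
import numpy as np

def euler_maruyama(f, D, x0, T, dt, rng=np.random.default_rng()):
    """Simulate dX = f(X) dt + D^{1/2} dW on the grid G = {0, dt, 2dt, ...}.

    f: drift R^d -> R^d; D: (d, d) positive-definite diffusion matrix.
    """
    d = len(x0)
    sqrtD = np.linalg.cholesky(D)              # one choice of D^{1/2}
    n = int(round(T / dt))
    X = np.empty((n + 1, d))
    X[0] = x0
    for t in range(n):
        X[t + 1] = X[t] + f(X[t]) * dt + sqrtD @ rng.standard_normal(d) * np.sqrt(dt)
    return X

# e.g. a drift of double-well type (illustrative): f(x) = 4 x (1 - x^2)
# path = euler_maruyama(lambda x: 4*x*(1 - x**2), 0.5*np.eye(1), np.zeros(1), 50.0, 1e-3)
```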
3 Bayesian Inference for dense observations
Suppose we observe a path of n d-dimensional observations $X_{0:T} = (X_t)_{t \in G}$ over the time interval
[0, T]. Since for $\Delta t \to 0$ the transition probabilities of the process are Gaussian,
$$p_f(X_{0:T} \mid f) \propto \exp\left[-\frac{1}{2\Delta t} \sum_t \|X_{t+\Delta t} - X_t - f(X_t)\,\Delta t\|^2\right], \tag{3}$$
the probability density for the path with a given drift function $f = (f(X_t))_{t \in G}$ at these observations
can be written as the product
$$p_f(X_{0:T} \mid f) = p_0(X_{0:T})\, L(X_{0:T} \mid f), \tag{4}$$
where
$$p_0(X_{0:T}) \propto \exp\left[-\frac{1}{2\Delta t} \sum_t \|X_{t+\Delta t} - X_t\|^2\right] \tag{5}$$
is the measure over paths without drift, i.e. a discretized version of the Wiener measure, and a term
which we will call likelihood in the following,
$$L(X_{0:T} \mid f) = \exp\left[-\frac{1}{2} \sum_t \|f(X_t)\|^2\,\Delta t + \sum_t \langle f(X_t),\, X_{t+\Delta t} - X_t\rangle\right]. \tag{6}$$
Here we have introduced the inner product $\langle u, v\rangle := u^\top D^{-1} v$ and the corresponding squared norm
$\|u\|^2 := u^\top D^{-1} u$ to avoid cluttered notation.
To attempt a nonparametric Bayesian estimate of the drift function f(x), we note that the exponent in (6) contains the drift f at most quadratically. Hence it becomes clear that a conjugate prior
to the drift for this model is given by a Gaussian process, i.e. we assume that for each component
f ∼ P_0(f) = GP(0, K), where K is a kernel [11], a fact which was recently observed in [8]. We denote probabilities over the drift f by upper case symbols in order to avoid confusion with path probabilities. Although a more general model is possible, we will restrict ourselves to the case where the
Figure 1: The left figure shows a snippet of the double well sample path in black and observations
as red dots. The right picture displays the estimated drift function for the double well model after
initialization, where the red line denotes the true drift function and the black line the mean function
with corresponding 95%-confidence bounds (twice the standard deviation) in blue. One can clearly
see that the larger distance between the consecutive points leads to a wrong prediction.
GP priors over the components f^j(x), j = 1, . . . , d of the drift are independent (with usually different kernels) and we assume that we have a diagonal diffusion matrix D = diag(σ₁², . . . , σ_d²). In this
case, the GP posteriors of f^j(x) are independent, too, and we can estimate drift components independently by ordinary GP regression. We define data vectors by d^j = ((X^j_{t+δt} − X^j_t)/δt)_{t∈G\{T}},
the kernel matrix K^j = (K^j(X_s, X_t))_{s,t∈G}, and the test vector k^j(x) = (K^j(x, X_t))_{t∈G}. Then a
standard calculation [11] shows that the posterior process over drift functions f has a posterior mean
and a GP posterior variance at an arbitrary point x is given by
        f̄^j(x) = k^j(x)ᵀ ( K^j + (σ_j²/δt) I )⁻¹ d^j,    σ²_{f^j}(x) = K^j(x, x) − k^j(x)ᵀ ( K^j + (σ_j²/δt) I )⁻¹ k^j(x).    (7)
Note that σ_j²/δt plays the role of the variance of the observation noise in the standard regression
case. In practice, the number of observations can be quite large for a fine time discretization, and a
fast computation of (7) could become infeasible. A possible way out of this problem, as suggested
by [8], could be a restriction to kernels for which the inverse kernel, the precision operator, is a
differential operator. A well known machine learning approach, which is based on a sparse Gaussian
process approximation, applies to arbitrary kernels and generalizes easily to multivariate SDE. We
have resorted specifically to the optimal Kullback-Leibler sparsity [1,12], where the likelihood term
of a GP model is replaced by another effective likelihood, which depends only on a smaller set of
variables f_s.
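Concretely, the estimate (7) is just ordinary GP regression with effective per-point noise variance σ_j²/δt. A minimal sketch for one drift component follows; this is our own illustration, and the RBF kernel with its hyperparameters is a placeholder assumption rather than the paper's choice:

```python
import numpy as np

def rbf_kernel(A, B, length=0.5, var=1.0):
    # Squared-exponential kernel between rows of A (m, d) and B (n, d)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_drift_posterior(X, j, dt, sigma2_j, x_test, kernel=rbf_kernel):
    """Eq. (7): posterior mean and variance of drift component f^j.

    X        : (n, d) densely observed path on the grid
    sigma2_j : diffusion variance sigma_j^2 of component j
    x_test   : (m, d) query points
    """
    dj = (X[1:, j] - X[:-1, j]) / dt              # data vector d^j
    Xin = X[:-1]                                  # inputs (time T excluded)
    A = kernel(Xin, Xin) + (sigma2_j / dt) * np.eye(len(Xin))
    k_t = kernel(x_test, Xin)
    mean = k_t @ np.linalg.solve(A, dj)
    sol = np.linalg.solve(A, k_t.T)
    var = kernel(x_test, x_test).diagonal() - np.einsum('ij,ji->i', k_t, sol)
    return mean, var
```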
4 MAP Inference for sparse observations
The simple GP regression approach outlined in the previous section cannot be applied when observations are sparse in time. In this setting, we assume that n observations y_k := X_{τ_k}, k = 1, . . . , n
are obtained at (for simplicity) regular intervals τ_k = kτ, where τ ≫ δt is much larger than the
microscopic time scale. In this case, a discretization in (6), where the sum over the microscopic grid
t ∈ G would be replaced by a sum over macroscopic times τ_k and δt by τ, would correspond to
a discrete time dynamical model of the form (2), again replacing δt by τ. But this discretization
would give a bad approximation to the true SDE dynamics. The estimator of the drift would give
some (approximate) estimation of the mean of the transition kernel over macroscopic times τ. However, this does usually not give a good approximation for the original drift. This can be seen in figure
1, where the red line corresponds to the true drift (of the so-called double-well model [4]) and the
black line to its prediction based on observations with τ = 0.2 and the naive estimation method.
To deal with this problem, we treat the process X_t for times t between consecutive observations
kτ < t < (k + 1)τ as a latent unobserved random variable with a posterior path measure given by
        p(X_{0:T}|y, f) ∝ p(X_{0:T}|f) ∏_{k=1}^n δ(y_k − X_{kτ}),    (8)
where y is the collection of observations y_k and δ(·) denotes the Dirac distribution encoding the
fact that the process is known perfectly at times τ_k. Our goal is to use an EM algorithm to compute
the maximum posterior (MAP) prediction for the drift function f(x). Unfortunately, exact posterior
expectations are intractable and one needs to work with suitable approximations.
4.1 Approximate EM algorithm
The EM algorithm cycles between two steps:
1. In the E-step, we compute the expected negative logarithm of the complete data likelihood
        L(f, q) = −E_q[ln L(X_{0:T}|f)],    (9)
where q denotes a measure over paths which approximates the intractable posterior
p(X_{0:T}|y, f_old) for the previous estimate f_old of the drift.
2. In the M-step, we recompute the drift function as
        f_new = arg min_f ( L(f, q) − ln P₀(f) ).    (10)
To compute the expectation in the E-step, we use (6) and take the limit δt → 0 at the end, when
expectations have been computed. As f(x) is a time-independent function, this yields
        −E_q[ln L(X_{0:T}|f)] = lim_{δt→0} (1/2) Σ_t ( E_q[||f(X_t)||²] δt − 2 E_q[⟨f(X_t), X_{t+δt} − X_t⟩] )
                              = (1/2) ∫₀^T E_q[ ||f(X_t)||² − 2 ⟨f(X_t), g_t(X_t)⟩ ] dt
                              = (1/2) ∫ ||f(x)||² A(x) dx − ∫ ⟨f(x), y(x)⟩ dx.    (11)
Here q_t(x) is the marginal density of X_t computed from the approximate posterior path measure q.
We have also defined the corresponding approximate posterior drift
        g_t(x) = lim_{δt→0} (1/δt) E_q[X_{t+δt} − X_t | X_t = x],    (12)
as well as the functions
        A(x) = ∫₀^T q_t(x) dt    and    y(x) = ∫₀^T g_t(x) q_t(x) dt.    (13)
There are two main problems for a practical realization of this EM algorithm:
1. We need to find tractable path measures q, which lead to good approximations for marginal
densities and posterior drifts given arbitrary prior drift functions f (x).
2. The M-Step requires a functional optimization, because (11) shows that L(f, q) − ln P₀(f)
is actually a functional of f(x), i.e. it contains a continuum of values f(x), where x ∈ R^d.
4.2 Linear drift approximation: The Ornstein-Uhlenbeck bridge
For given drift f(·) and times t ∈ I_k in the interval I_k = [kτ, (k + 1)τ] between two consecutive
observations, the exact posterior marginal p_t(x) equals the density of X_t = x conditioned on the
fact that X_{kτ} = y_k and X_{(k+1)τ} = y_{k+1}. This can be expressed by the transition densities of the
homogeneous Markov diffusion process with drift f(x). We denote this quantity by p_s(X_{t+s}|X_t),
being the density of the random variable X_{t+s} at time t + s conditioned on X_t at time t. Using the
Markov property, this yields the representation
        p_t(x) ∝ p_{(k+1)τ−t}(y_{k+1}|x) p_{t−kτ}(x|y_k)    for t ∈ I_k.    (14)
As functions of t and x, the second factor fulfills a forward Fokker-Planck equation and the first one
a Kolmogorov backward equation [13]. Both are partial differential equations. Since exact computations are not feasible for general drift functions, we approximate the transition density p_s(x|x_k) in
each interval I_k by that of a process, where the drift f(x) is replaced by its local linearization
        f(x) ≈ f_ou(x, t) = f(x_k) − Γ_k (x − x_k)    with Γ_k = −∇f(x_k).    (15)
This is equivalent to assuming that for t ∈ I_k the dynamics is approximated by the homogeneous
Ornstein-Uhlenbeck process [13]
        dX_t = [ f(y_k) − Γ_k (X_t − y_k) ] dt + D^{1/2} dW,    (16)
which is also used to build computationally efficient hierarchical models [14, 15], as in this case
the marginal posterior can be calculated analytically. Here the transition density is a multivariate
Gaussian
        q_s^{(k)}(x|y) = N( x | μ_k + e^{−Γ_k s} (y − μ_k); S_s ),    (17)
where μ_k = y_k + Γ_k⁻¹ f(y_k) is the stationary mean and the variance S_s = A_s B_s⁻¹ is calculated
using the matrix exponential
        [ A_s ; B_s ] = exp( s [ −Γ_k  D ; 0  Γ_kᵀ ] ) [ 0 ; I ].    (18)
Then we obtain the Gaussian approximation q_t^{(k)}(x) = N(x | m(t); C(t)) of the marginal posterior
for t ∈ I_k by multiplying the two transition densities, where
        C(t) = ( e^{−Γ_kᵀ(t_{k+1}−t)} S_{t_{k+1}−t}⁻¹ e^{−Γ_k(t_{k+1}−t)} + S_{t−t_k}⁻¹ )⁻¹
and
        m(t) = C(t) e^{−Γ_kᵀ(t_{k+1}−t)} S_{t_{k+1}−t}⁻¹ ( y_{k+1} − μ_k + e^{−Γ_k(t_{k+1}−t)} μ_k )
             + C(t) S_{t−t_k}⁻¹ ( μ_k + e^{−Γ_k(t−t_k)} (y_k − μ_k) ).
By inspecting mean and variance we see that the distribution is equivalent to a bridge between the
points X = y_k and X = y_{k+1} and collapses to point masses at these points.
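Numerically, S_s from (18) and the bridge moments m(t), C(t) reduce to one matrix exponential per evaluation. The following sketch is our own illustration of these formulas, assuming the reconstruction of m(t) and C(t) above; `f` and `grad_f` are user-supplied callables:

```python
import numpy as np
from scipy.linalg import expm

def ou_transition(Gamma, D, s):
    """S_s = A_s B_s^{-1} from the matrix exponential of Eq. (18)."""
    d = Gamma.shape[0]
    M = np.block([[-Gamma, D], [np.zeros((d, d)), Gamma.T]])
    E = expm(s * M)
    A_s, B_s = E[:d, d:], E[d:, d:]
    return A_s @ np.linalg.inv(B_s)

def ou_bridge_moments(f, grad_f, yk, yk1, tk, tk1, D, t):
    """Gaussian bridge marginal N(m(t), C(t)) between (tk, yk) and (tk1, yk1)."""
    Gamma = -grad_f(yk)                              # Eq. (15)
    mu = yk + np.linalg.solve(Gamma, f(yk))          # stationary mean mu_k
    Sf = ou_transition(Gamma, D, t - tk)             # forward cov S_{t-tk}
    Sb = ou_transition(Gamma, D, tk1 - t)            # backward cov S_{tk1-t}
    Eb = expm(-Gamma * (tk1 - t))
    Ef = expm(-Gamma * (t - tk))
    C = np.linalg.inv(Eb.T @ np.linalg.solve(Sb, Eb) + np.linalg.inv(Sf))
    m = C @ (Eb.T @ np.linalg.solve(Sb, yk1 - mu + Eb @ mu)
             + np.linalg.solve(Sf, mu + Ef @ (yk - mu)))
    return m, C
```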
Within this approximation, we can estimate parameters such as the diffusion D using the approximate evidence
        p(y|f) ≈ p_ou(y) = p(x₁) ∏_{k=1}^{n−1} q_τ^{(k)}(y_{k+1}|y_k).    (19)
Finally, in this approximation we obtain for the posterior drift
        g_t(x) = lim_{δt→0} (1/δt) E[ X_{t+δt} − X_t | X_t = x, X_τ = y_{k+1} ]
               = f(y_k) − Γ_k (x − y_k) + D e^{−Γ_kᵀ(t_{k+1}−t)} S_{t_{k+1}−t}⁻¹ ( y_{k+1} − μ_k − e^{−Γ_k(t_{k+1}−t)} (x − μ_k) ),
as shown in appendix A in the supplementary material.
4.3 Sparse M-Step approximation
To cope with the functional optimization, we resort to a sparse approximation, replacing the
infinite set f by a sparse set f_s. Here the GP posterior (for each component of the drift) is replaced
by one that is closest in the KL sense. Following appendix B in the supplementary material, we find
that in the sparse approximation the likelihood (11) is replaced by
        L_s(f, q) = (1/2) ∫ ||E₀[f(x)|f_s]||² A(x) dx − ∫ ⟨E₀[f(x)|f_s], y(x)⟩ dx,    (20)
where the conditional expectation is over the GP prior. In order to avoid cluttered notation, it should
be noted that in the following results for a component f^j, the quantities Λ_s, f_s, k_s, K_s⁻¹, y(x), σ²,
similar to (7), depend on the component j, but not A(x).
This is easily computed as
        E₀[f(x)|f_s] = k_sᵀ(x) K_s⁻¹ f_s.    (21)
Hence
        L_s(f, q) = (1/2) f_sᵀ Λ_s f_s − f_sᵀ d_s    (22)
with
        Λ_s = (1/σ²) K_s⁻¹ ( ∫ k_s(x) A(x) k_sᵀ(x) dx ) K_s⁻¹,    d_s = (1/σ²) K_s⁻¹ ∫ k_s(x) y(x) dx.    (23)
With these results, the approximate MAP estimate is
        f̂_s(x) = k_sᵀ(x) (I + Λ_s K_s)⁻¹ d_s.    (24)
The integrals over x in (23) can be computed analytically for many kernels of interest such as
polynomial and RBF ones. However, we have done this for 1-dimensional models only. For higher
dimensions, we found it more efficient to treat both the time integration in (13) and the x integrals
by sampling, where time points t are drawn uniformly at random and x points from the multivariate
Gaussian q_t(x).
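A sketch of this sampling-based M-step, assuming samples (t_i, x_i) have already been drawn as just described, is given below. This is our own illustration; the argument shapes and helper names are assumptions:

```python
import numpy as np

def sparse_map_drift(Xs, kernel, sigma2, samples_x, g_vals, T):
    """Monte Carlo estimate of Eq. (23) and the MAP drift Eq. (24).

    Xs        : (m, d) sparse points
    samples_x : (S, d) points x_i with t_i ~ Uniform[0, T], x_i ~ q_{t_i}
    g_vals    : (S,) posterior drift values g_{t_i}(x_i) for one component
    """
    S = len(samples_x)
    Ks = kernel(Xs, Xs)
    ks = kernel(Xs, samples_x)                         # (m, S)
    Kinv = np.linalg.inv(Ks)
    # int k_s(x) A(x) k_s(x)^T dx and int k_s(x) y(x) dx as sample averages
    Lam = (T / (S * sigma2)) * Kinv @ (ks @ ks.T) @ Kinv
    ds = (T / (S * sigma2)) * Kinv @ (ks @ g_vals)
    def f_hat(x):                                      # Eq. (24)
        return kernel(x, Xs) @ np.linalg.solve(np.eye(len(Xs)) + Lam @ Ks, ds)
    return f_hat
```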
A related expression for the variance, σ_s²(x) = K(x, x) − k_sᵀ(x)(I + Λ_s K_s)⁻¹ Λ_s k_s(x), can only be
viewed as a crude estimate, because it does not include the impact of the GP fluctuations on the path
probabilities.
5 A crude estimate of an approximation error
Unfortunately, there is no guarantee that this approximation to the EM algorithm will always increase the exact likelihood p(y|f). Here, we will develop a crude estimate of how p(y|f) differs
from the Ornstein-Uhlenbeck approximation (19) to lowest order in the difference Δf(X_t, t) :=
f(X_t) − f_ou(X_t, t) between drift function and its approximation.
Our estimate is based on the exact expression
        p(y|f) = ∫ dp₀(X_{0:T}) e^{ln L(X_{0:T}|f)} ∏_{k=1}^n δ(y_k − X_{kτ}),    (25)
where the Wiener measure p₀ is defined in (5) and the likelihood L(X_{0:T}|f) in (6). The Ornstein-Uhlenbeck approximation (19) can be expressed in a similar way: we just have to replace L(X_{0:T}|f)
by a functional L_ou(X_{0:T}|f), which in turn is obtained by replacing f(X_t) with the linearized drift
f_ou(X_t, t) in (6). The difference in free energies (negative log evidences) can be expressed exactly by an expectation over the posterior OU processes and then expanded (similar to a cumulant
expansion) in a Taylor series in ΔL := −ln(L/L_ou). The first two terms are given by
        ΔF := −{ ln p(y|f) − ln p_ou(y) } = −ln E_q[e^{−ΔL}] ≈ E_q[ΔL] − (1/2) Var_q[ΔL] − . . .    (26)
The computation of the first term is similar to (11) and requires only the marginal q_t and the posterior
g_t. The second term contains the posterior variance and requires two-time covariances of the OU
process. We concentrate on the first term, which we further expand in the difference Δf(X_t, t). This
yields
        ΔF ≈ E_q[ΔL] ≈ ∫₀^T E_q[ ⟨Δf(X_t, t), f_ou(X_t, t) − g_t(X_t)⟩ ] dt.    (27)
This expression could be evaluated in order to estimate the influence of nonlinear parts of the drift
on the approximation error.
6 Experiments
In all experiments, we used different versions of the following general kernel, which is a linear
combination of a RBF and a polynomial kernel,
        K(x₁, x₂) = c σ_RBF exp( −(x₁ − x₂)ᵀ(x₁ − x₂) / (2 l²_RBF) ) + (1 − c)(1 + x₁ᵀ x₂)^p,    (28)
where the hyperparameters σ_RBF and l_RBF denote the variance and length scale of the RBF kernel
and p denotes the order of the polynomial kernel.
Also, we determined the sparse points for the GP algorithm in each case by first constructing a
histogram over the observations and then selecting the set of histogram midpoints of each histogram
bin which contained at least a certain number bmin of observations. In our experiments, we chose
bmin = 5.
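A minimal sketch of this histogram-based selection of sparse points (our own illustration; the number of bins is an assumption, not stated in the text):

```python
import numpy as np

def sparse_points_from_histogram(obs, n_bins=50, b_min=5):
    """Pick midpoints of histogram bins holding at least b_min observations."""
    counts, edges = np.histogram(obs, bins=n_bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return mids[counts >= b_min]
```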
Figure 2: The figures show the estimated drift functions for the double well model (left) and the
periodic diffusion model (right) after completion of the EM algorithm. Again, the black and blue
lines denote mean and 95%-confidence bounds, while the red lines indicate the true drift functions.
6.1 One-dimensional toy models
First we test our algorithm on two toy data sets, the double well model with dynamics given by the
SDE
        dx = 4(x − x³) dt + dW,    (29)
and a diffusion model driven by a periodic drift,
        dx = sin(x) dt + dW.    (30)
For both models, we simulated a path of size M = 10⁵ on a regular grid with width δt = 0.01 from
the corresponding SDE and kept every 20th sample point as observation, resulting in N = 5000
data points. We initialized the EM Algorithm by running the sparse GP for the observation points
without any imputation and subsequently computed the expectation operators by analytically evaluating the expressions on the same time grid as the simulated path and summing over the time steps.
An alternative initialization strategy which consists of generating a full trajectory of the same size as
the original path using Brownian bridge sampling between observations did not bring any noticeable
performance improvements. Since we cannot guarantee that the likelihood increases in every iteration due to the approximation in the E-step, we resort to a simple heuristic by assuming convergence
once L stabilizes up to some minor fluctuation. In our experiments convergence was typically attained after a few (< 10) iterations. For the double well model we used an equal weighting c = 0.5
between kernels with hyperparameters σ_RBF = 1, l_RBF = 0.5 and p = 5, whereas for the periodic
model we used an RBF kernel (c = 1) with the same values for σ_RBF and l_RBF.
6.2 Application to a real data set
As an example of a real world data set, we used the NGRIP ice core data (provided by the Niels Bohr Institute in Copenhagen, http://www.iceandclimate.nbi.ku.dk/data/), which
provides an undisturbed ice core record containing climatic information stretching back into the
last glacial. Specifically, this data set as shown in figure 3 contains 4918 observations of oxygen
isotope concentration δ¹⁸O over a time period from the present to roughly 1.23 × 10⁵ years into
the past. Since there are generally fewer isotopes in ice formed under cold conditions, the isotope
concentration can be regarded as an indicator of past temperatures.
Recent research [16] suggests modeling the rapid paleoclimatic changes exhibited in the data set
by a simple dynamical system with a polynomial drift function of order p = 3 as a canonical model
which allows for bistability. This corresponds to a meta stable state at higher temperatures close to
marginal stability and a stable state at low values, which is consistent with other research on this
data set, linking a stable state of oxygen isotopes to a baseline temperature and a region at higher
values corresponding to the occurrence of rapid temperature spikes. For this particular problem we
first tried to determine the diffusion constant σ of the data. Therefore we estimated the likelihood of
the data set for 40 fixed values of σ in an interval from 0.3 to 11.5 by running the EM algorithm with
a polynomial kernel (c = 0) of order p = 3 for each value in turn. The resulting drift function with
the highest likelihood is shown in figure 3. The result seems to confirm the existence of a metastable
state of oxygen isotope concentration and a stable state at lower values.
Figure 3: The figure on the left displays the NGRIP data set, while the picture on the right shows
the estimated drift in black with corresponding 95%-confidence bounds denoting twice the standard
deviation in blue for the optimal diffusion value σ̂ = 2.9.
Figure 4: The left figure shows the empirical density for the two-dimensional model, together with
the vector fields of the actual drift function given in blue and the estimated drift given in red. The
right picture shows a snippet from the full sample in black together with the first 20 observations
denoted by red dots.
6.3 Two-dimensional toy model
As an example of a two dimensional system, we simulated from a process with the following SDE:
        dx = ( x(1 − x² − y²) − y ) dt + dW₁,    (31)
        dy = ( y(1 − x² − y²) + x ) dt + dW₂.    (32)
For this model we simulated a path of size M = 10⁶ on a regular grid with width δt = 0.002 from
the corresponding SDE and kept every 100th sample point as observation, resulting in N = 10⁴ data
points. In the inference shown in figure 4 we used a polynomial kernel (c = 0) of order p = 4.
7 Discussion
It would be interesting to replace the ad hoc local linear approximation of the posterior drift by a
more flexible time dependent Gaussian model. This could be optimized in a variational EM approximation by minimizing a free energy in the E-step, which contains the Kullback-Leibler divergence
between the linear and true processes. Such a method could be extended to noisy observations and
the case, where some components of the state vector are not observed. Finally, this method could be
turned into a variational Bayesian approximation, where one optimizes posteriors over both drifts
and over state paths. The path probabilities are then influenced by the uncertainties in the drift
estimation, which would lead to more realistic predictions of error bars.
Acknowledgments This work was supported by the European Community's Seventh Framework
Programme (FP7, 2007-2013) under the grant agreement 270327 (CompLACS).
References
[1] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. JMLR
W&CP, 5:567–574, 2009.
[2] Marc Deisenroth and Shakir Mohamed. Expectation propagation in Gaussian process dynamical systems.
In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural
Information Processing Systems 25, pages 2618–2626. 2012.
[3] Jonathan Ko and Dieter Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and
observation models. Autonomous Robots, 27(1):75–90, July 2009.
[4] Cédric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, and John Shawe-Taylor. Variational
inference for diffusion processes. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in
Neural Information Processing Systems 20, pages 17–24. MIT Press, Cambridge, MA, 2008.
[5] José Bento Ayres Pereira, Morteza Ibrahimi, and Andrea Montanari. Learning networks of stochastic
differential equations. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta,
editors, Advances in Neural Information Processing Systems 23, pages 172–180. 2010.
[6] Danilo J. Rezende, Daan Wierstra, and Wulfram Gerstner. Variational learning for recurrent spiking
networks. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors,
Advances in Neural Information Processing Systems 24, pages 136–144. 2011.
[7] Simon Lyons, Amos Storkey, and Simo Särkkä. The coloured noise expansion and parameter estimation
of diffusion processes. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger,
editors, Advances in Neural Information Processing Systems 25, pages 1961–1969. 2012.
[8] Omiros Papaspiliopoulos, Yvo Pokern, Gareth O. Roberts, and Andrew M. Stuart. Nonparametric estimation of diffusions: a differential equations approach. Biometrika, 99(3):511–531, 2012.
[9] Yvo Pokern, Andrew M. Stuart, and J.H. van Zanten. Posterior consistency via precision operators
for Bayesian nonparametric drift estimation in SDEs. Stochastic Processes and their Applications,
123(2):603–628, 2013.
[10] P. E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, New
York, corrected edition, June 2011.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] Lehel Csató, Manfred Opper, and Ole Winther. TAP Gibbs free energy, belief propagation and sparsity.
In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing
Systems 14, pages 657–663. MIT Press, 2002.
[13] C. W. Gardiner. Handbook of Stochastic Methods. Springer, Berlin, second edition, 1996.
[14] Manfred Opper, Andreas Ruttor, and Guido Sanguinetti. Approximate inference in continuous time
Gaussian-jump processes. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1831–1839. 2010.
[15] Florian Stimberg, Manfred Opper, and Andreas Ruttor. Bayesian inference for change points in dynamical
systems with reusable states: a Chinese restaurant process approach. JMLR W&CP, 22:1117–1124,
2012.
[16] Frank Kwasniok. Analysis and modelling of glacial climate transitions using simple dynamical systems.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,
371(1991), 2013.
| 4967 |@word version:2 inversion:1 polynomial:6 norm:1 seems:1 hu:1 d2:1 linearized:2 tried:1 covariance:2 p0:6 edric:1 contains:5 series:1 selecting:1 denoting:1 past:2 discretization:4 dx:9 attracted:1 written:1 dw1:1 john:1 additive:1 realistic:1 numerical:1 sdes:4 stationary:1 xk:7 core:2 record:1 manfred:6 recompute:1 provides:1 philipp:2 obser:1 bayesfilters:1 wierstra:1 mathematical:1 differential:13 become:2 ik:6 yuan:1 consists:1 dan:1 introduce:1 x0:17 expected:1 rapid:2 andrea:1 roughly:1 discretized:2 riemann:1 actual:1 lyon:1 pf:2 becomes:1 provided:1 estimating:1 notation:2 mass:1 lowest:1 sde:11 unobserved:2 nonrandom:1 guarantee:2 every:3 exactly:2 biometrika:1 wrong:1 platt:1 grant:1 planck:1 ice:3 engineering:1 local:2 treat:2 limit:2 encoding:1 path:24 fluctuation:2 black:6 batz:2 twice:2 initialization:2 k:6 chose:1 archambeau:1 collapse:1 practical:1 acknowledgment:1 lost:1 practice:1 differs:1 x3:1 cold:1 empirical:1 confidence:3 regular:4 suggest:1 cannot:2 close:1 operator:6 influence:1 restriction:1 equivalent:3 map:4 deterministic:2 demonstrated:1 www:1 williams:3 cluttered:2 l:2 shen:1 simplicity:2 estimator:1 q:1 regarded:1 dw:4 stability:1 autonomous:1 pt:3 suppose:1 play:1 exact:5 guido:1 gps:1 us:1 homogeneous:2 agreement:1 storkey:1 approximated:4 ibrahimi:1 observed:4 role:1 yvo:2 cornford:1 nbi:1 region:1 cycle:1 culotta:2 highest:1 yk:21 dynamic:8 depend:1 titsias:1 f2:1 completely:1 easily:2 kolmogorov:1 derivation:1 separated:1 fast:1 effective:1 ole:1 zemel:3 vations:1 quite:1 heuristic:1 larger:3 supplementary:2 s:2 gp:22 noisy:2 varq:1 shakir:1 bento:1 hoc:1 sequence:1 product:2 tu:6 j2:1 turned:1 realization:1 climatic:1 roweis:1 inducing:1 dirac:1 convergence:2 double:6 p:2 generating:1 tk:9 develop:3 completion:1 recurrent:1 andrew:2 minor:1 qt:6 noticeable:1 eq:9 indicate:1 direction:1 concentrate:1 stochastic:10 subsequently:1 material:2 bin:1 inspecting:1 fnew:1 pou:2 exp:5 stabilizes:1 continuum:2 consecutive:3 estimation:9 bridge:3 successfully:1 amos:1 mit:3 clearly:1 gaussian:18 always:1 avoid:3 derived:1 rezende:1 june:1 improvement:1 modelling:1 likelihood:11 rigorous:1 baseline:1 sense:1 inference:11 dependent:1 entire:1 typically:1 lehel:1 hidden:1 koller:1 expand:1 arg:1 flexible:1 denoted:1 exponent:1 smoothing:1 integration:1 marginal:7 equal:2 once:1 undisturbed:1 field:1 sampling:5 stuart:2 piecewise:1 few:1 densely:1 divergence:1 xtj:1 replaced:6 ourselves:1 attempt:2 interest:2 introduces:1 integral:3 partial:1 simo:1 fox:1 incomplete:1 taylor:5 logarithm:1 initialized:1 e0:2 fitted:1 modeling:1 bistability:1 maximization:1 ordinary:2 deviation:2 euler:1 seventh:1 too:1 dp0:1 periodic:3 st:4 density:10 winther:1 jos:1 complacs:1 together:2 squared:1 again:2 containing:1 creating:1 resort:2 toy:3 de:4 ornstein:5 depends:2 vi:1 ad:1 red:6 hf:4 simon:1 formed:1 wiener:3 variance:7 stretching:1 correspond:1 yield:3 dealt:2 bayesian:6 eln:1 multiplying:1 trajectory:1 influenced:1 energy:3 mohamed:1 naturally:1 sampled:1 maruyama:1 treatment:1 lim:3 organized:1 ou:2 actually:1 back:1 higher:3 dt:10 attained:1 danilo:1 done:1 evaluated:1 just:1 d:3 hand:1 replacing:3 nonlinear:2 propagation:2 defines:1 dietterich:1 true:5 evolution:1 analytically:3 hence:2 leibler:2 deal:2 white:1 isotope:5 sin:1 climate:1 width:2 noted:1 complete:2 bmin:2 confusion:1 bring:1 temperature:4 oxygen:3 variational:5 recently:1 ornsteinuhlenbeck:1 functional:4 spiking:1 physical:2 linking:1 approximates:1 measurement:1 cambridge:1 gibbs:2 rd:2 grid:5 
outlined:1 glacial:2 consistency:1 shawe:4 dj:2 dot:2 stable:4 robot:1 gt:6 posterior:27 multivariate:4 recent:3 closest:1 brownian:1 optimizes:1 driven:1 sarkka:1 certain:1 meta:1 seen:1 florian:1 determine:1 period:1 july:1 full:2 faster:1 calculation:1 impact:1 prediction:9 regression:4 ko:1 expectation:10 histogram:3 kernel:18 uhlenbeck:5 iteration:2 whereas:1 fine:1 interval:6 macroscopic:2 exhibited:1 lafferty:2 call:1 variety:1 restaurant:1 restrict:1 perfectly:1 andreas:4 inner:1 expression:5 bartlett:3 becker:1 f:8 york:1 generally:1 governs:1 clear:1 nonparametric:5 locally:1 imputed:1 http:1 canonical:1 estimated:5 csat:1 blue:4 discrete:3 kloeden:1 reusable:1 drawn:1 imputation:1 diffusion:12 kept:2 backward:1 resorted:1 year:2 sum:3 facilitated:1 inverse:2 uncertainty:1 family:1 appendix:2 dy:1 bound:3 display:2 fold:2 nontrivial:1 gardiner:1 x2:5 wc:2 min:1 expanded:1 he0:1 metastable:1 alternate:1 combination:1 conjugate:2 smaller:1 em:12 b:2 dieter:1 computationally:2 equation:13 ln:8 discus:1 turn:2 singer:1 tractable:1 fp7:1 end:1 available:1 generalizes:1 observe:1 hierarchical:1 occurrence:1 alternative:2 weinberger:3 existence:1 original:2 denotes:4 running:2 include:1 michalis:1 ghahramani:1 build:1 chinese:1 society:1 realized:1 quantity:3 spike:1 strategy:1 costly:1 concentration:3 diagonal:1 microscopic:4 distance:1 lou:2 berlin:7 simulated:4 assuming:2 length:1 minimizing:1 unfortunately:3 mostly:1 robert:1 frank:1 negative:2 upper:1 observation:31 markov:3 daan:1 snippet:2 extended:1 arbitrary:4 community:2 drift:52 introduced:1 copenhagen:1 required:1 kl:1 optimized:1 philosophical:1 tap:1 quadratically:1 nip:1 suggested:1 bar:1 dynamical:7 usually:5 sparsity:2 royal:1 belief:1 suitable:1 natural:1 treated:1 indicator:1 picture:3 carried:1 naive:1 kj:5 prior:8 coloured:1 evolve:1 dxt:2 interesting:1 filtering:1 consistent:1 editor:7 supported:1 last:1 free:3 rasmussen:1 infeasible:1 burges:2 institute:1 stimberg:1 midpoint:1 sparse:15 van:1 opper:6 calculated:2 transition:8 world:2 dimension:1 evaluating:1 author:1 collection:1 forward:1 jump:1 programme:1 far:1 cope:1 transaction:1 approximate:13 kullback:2 ruttor:4 confirm:1 handbook:1 summing:1 sanguinetti:1 continuous:4 latent:2 ku:1 expansion:2 zanten:1 bottou:2 european:1 gerstner:1 constructing:1 marc:1 diag:1 did:1 dense:3 main:1 montanari:1 s2:1 noise:4 hyperparameters:2 edition:2 x1:4 papaspiliopoulos:1 precision:3 pereira:4 exponential:1 crude:3 jmlr:2 weighting:1 bad:1 xt:50 symbol:1 x:1 dk:1 evidence:2 intractable:2 linearization:1 conditioned:3 morteza:1 platen:1 univariate:2 expressed:3 contained:2 omiros:1 applies:1 springer:2 corresponds:2 fokker:1 gareth:1 ma:1 conditional:1 goal:1 viewed:1 rbf:8 replace:2 considerable:1 feasible:1 change:2 wulfram:1 specifically:2 infinite:1 uniformly:1 determined:1 sampler:1 corrected:1 called:1 deisenroth:1 fulfills:1 jonathan:1 cumulant:1 d1:3 |
4,384 | 4,968 | Online Learning of Nonparametric Mixture Models via Sequential Variational Approximation
Dahua Lin
Toyota Technological Institute at Chicago
[email protected]
Abstract
Reliance on computationally expensive algorithms for inference has been limiting
the use of Bayesian nonparametric models in large scale applications. To tackle this
problem, we propose a Bayesian learning algorithm for DP mixture models. Instead of following the conventional paradigm of random initialization plus iterative
update, we take a progressive approach. Starting with a given prior, our method
recursively transforms it into an approximate posterior through sequential variational approximation. In this process, new components will be incorporated on the
fly when needed. The algorithm can reliably estimate a DP mixture model in one
pass, making it particularly suited for applications with massive data. Experiments
on both synthetic data and real datasets demonstrate remarkable improvement in
efficiency: orders of magnitude speed-up compared to the state-of-the-art.
1 Introduction
Bayesian nonparametric mixture models [7] provide an important framework to describe complex
data. In this family of models, Dirichlet process mixture models (DPMM) [1, 15, 18] are among
the most popular in practice. As opposed to traditional parametric models, DPMM allows the number of components to vary during inference, thus providing great flexibility for explorative analysis. Nonetheless, the use of DPMM in practical applications, especially those with massive data,
has been limited due to high computational cost. MCMC sampling [12, 14] is the conventional approach to Bayesian nonparametric estimation. With heavy reliance on local updates to explore the
solution space, they often show slow mixing, especially on large datasets. Whereas the use of split-merge moves and data-driven proposals [9,17,20] has substantially improved the mixing performance,
MCMC methods still require many passes over a dataset to reach the equilibrium distribution.
Variational inference [4, 11, 19, 22], an alternative approach based on mean field approximation, has
become increasingly popular recently due to better run-time performance. Typical variational methods for nonparametric mixture models rely on a truncated approximation of the stick breaking construction [16], which requires a fixed number of components to be maintained and iteratively updated
during inference. The truncation level is usually set conservatively to ensure approximation accuracy, incurring a considerable amount of unnecessary computation.
The era of Big Data presents new challenges for machine learning research. Many real world applications involve massive amounts of data that cannot even be accommodated entirely in memory.
Both MCMC sampling and variational inference maintain the entire configuration and perform iterative updates over multiple passes, which are often too expensive for large scale applications. This
challenge motivated us to develop a new learning method for Bayesian nonparametric models that
can handle massive data efficiently. In this paper, we propose an online Bayesian learning algorithm
for generic DP mixture models. This algorithm does not require random initialization of components.
Instead, it begins with the prior DP(??) and progressively transforms it into an approximate posterior
of the mixtures, with new components introduced on the fly as needed. Based on a new way of variational approximation, the algorithm proceeds sequentially, taking in one sample at a time to make the
update. We also devise specific steps to prune redundant components and merge similar ones, thus
further improving the performance. We tested the proposed method on synthetic data as well as two
real applications: modeling image patches and clustering documents. Results show empirically that
the proposed algorithm can reliably estimate a DP mixture model in a single pass over large datasets.
2 Related Work
Recent years have witnessed many efforts devoted to developing efficient learning algorithms for Bayesian
nonparametric models. An important line of research is to accelerate the mixing in MCMC through
better proposals. Jain and Neal [17] proposed to use split-merge moves to avoid being trapped in
local modes. Dahl [6] developed the sequentially allocated sampler, where splits are proposed by
sequentially allocating observations to one of two split components through sequential importance
sampling. This method was recently extended for HDP [20] and BP-HMM [9].
There has also been substantial advancement in variational inference. A significant development along
this line is the Stochastic Variational Inference, a framework that incorporates stochastic optimization
with variational inference [8]. Wang et al. [23] extended this framework to the non-parametric realm,
and developed an online learning algorithm for HDP [18]. Wang and Blei [21] also proposed a
truncation-free variational inference method for generic BNP models, where a sampling step is used
for updating atom assignment that allows new atoms to be created on the fly.
Bryant and Sudderth [5] recently developed an online variational inference algorithm for HDP, using
mini-batch to handle streaming data and split-merge moves to adapt truncation levels. They tried to
tackle the problem of online BNP learning as we do, but via a different approach. First, we propose a
generic method while theirs focuses on topic models. The designs are also different: our method starts
from scratch and progressively adds new components. Its overall complexity is O(nK), where n and
K are the number of samples and the expected number of components. Bryant's method begins with random
initialization and relies on splits over mini-batch to create new topics, resulting in the complexity of
O(nKT ), where T is the number of iterations for each mini-batch. The differences stem from the
theoretical basis: our method uses sequential approximation based on the predictive law, while theirs
is an extension of the standard truncation-based model.
Nott et al. [13] recently proposed a method, called VSUGS, for fast estimation of DP mixture models.
Similar to our algorithm, the VSUGS method takes a sequential updating approach, but
relies on a different approximation. Particularly, what we approximate is a joint posterior over both
data allocation and model parameters, while VSUGS is based on approximating the posterior of
data allocation. Also, VSUGS requires fixing a truncation level T in advance, which may lead to
difficulties in practice (especially for large data). Our algorithm provides a way to tackle this, and no
longer requires fixed truncation.
3 Nonparametric Mixture Models
This section provides a brief review of the Dirichlet Process Mixture Model, one of the most widely
used nonparametric mixture models. A Dirichlet Process (DP), typically denoted by DP(αμ), is
characterized by a concentration parameter α and a base distribution μ. It has been shown that
sample paths of a DP are almost surely discrete [16], and can be expressed as
        D = Σ_{k=1}^∞ π_k δ_{φ_k},    with π_k = v_k ∏_{l=1}^{k−1} (1 − v_l),  v_k ∼ Beta(1, α),  ∀k = 1, 2, . . . .    (1)
This is often referred to as the stick breaking representation, and φ_k is called an atom. Since an
atom can be repeatedly generated from D with positive probability, the number of distinct atoms is
usually less than the number of samples. The Dirichlet Process Mixture Model (DPMM) exploits this
property, and uses a DP sample as the prior of component parameters. Below is a formal definition:
        D ∼ DP(αμ),   θ_i ∼ D,   x_i ∼ F(·|θ_i),   ∀i = 1, . . . , n.    (2)
Consider a partition {C₁, . . . , C_K} of {1, . . . , n} such that θ_i are identical for all i ∈ C_k, which
we denote by φ_k. Instead of maintaining θ_i explicitly, we introduce an indicator z_i for each i with
θ_i = φ_{z_i}. Using this clustering notation, this formulation can be rewritten equivalently as follows:
        z_{1:n} ∼ CRP(α),   φ_k ∼ μ,   ∀k = 1, 2, . . . , K,
        x_i ∼ F(·|φ_{z_i}),   ∀i = 1, 2, . . . , n.    (3)
Here, CRP(α) denotes a Chinese Restaurant Prior, which is a distribution over exchangeable partitions. Its probability mass function is given by
        p_CRP(z_{1:n}|α) = ( Γ(α) α^K / Γ(α + n) ) ∏_{k=1}^K Γ(|C_k|).    (4)
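For intuition, the partition prior (4) can be sampled by the usual sequential seating scheme. A minimal sketch, our own illustration rather than part of the paper:

```python
import numpy as np

def sample_crp(n, alpha, rng=None):
    """Draw a partition z_{1:n} ~ CRP(alpha) by sequential seating."""
    rng = np.random.default_rng() if rng is None else rng
    z, sizes = np.empty(n, dtype=int), []
    for i in range(n):
        # Existing clusters weighted by size; a new cluster weighted by alpha
        probs = np.array(sizes + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(sizes):
            sizes.append(1)
        else:
            sizes[k] += 1
        z[i] = k
    return z
```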
4 Variational Approximation of Posterior
Generally, there are two approaches to learning a mixture model from observed data, namely maximum likelihood estimation (MLE) and Bayesian learning. Specifically, maximum likelihood estimation seeks an optimal point estimate of θ, while Bayesian learning aims to derive the posterior
distribution over the mixtures. Bayesian learning takes into account the uncertainty about θ, often
resulting in better generalization performance than MLE.
In this paper, we focus on Bayesian learning. In particular, for DPMM, the predictive distribution of
component parameters, conditioned on a set of observed samples x_{1:n}, is given by
        p(θ′|x_{1:n}) = E_{D|x_{1:n}}[ p(θ′|D) ].    (5)
Here, E_{D|x_{1:n}} takes the expectation w.r.t. p(D|x_{1:n}). In this section, we derive a tractable approximation of this predictive distribution based on a detailed analysis of the posterior.
4.1 Posterior Analysis
Let D ∼ DP(αμ) and θ₁, . . . , θ_n be iid samples from D, {C₁, . . . , C_K} be a partition of {1, . . . , n}
such that θ_i for all i ∈ C_k are identical, and φ_k = θ_i ∀i ∈ C_k. Then the posterior distribution of D
remains a DP, as D|θ_{1:n} ∼ DP(α̃ μ̃), where α̃ = α + n, and
        μ̃ = ( α/(α + n) ) μ + Σ_{k=1}^K ( |C_k|/(α + n) ) δ_{φ_k}.    (6)
The atoms are generally unobservable, and therefore it is more interesting in practice to consider the
posterior distribution of D given the observed samples. For this purpose, we derive the lemma below
that provides a constructive characterization of the posterior distribution given both the observed
samples x1:n and the partition z.
Lemma 1. Consider the DPMM in Eq.(3). Drawing a sample from the posterior distribution
p(D|z1:n , x1:n ) is equivalent to constructing a random probability measure as follows
        β₀ D₀ + Σ_{k=1}^K β_k δ_{φ_k},
        with D₀ ∼ DP(αμ),  (β₀, β₁, . . . , β_K) ∼ Dir(α, m₁, . . . , m_K),  φ_k ∼ μ|C_k.    (7)
Here, m_k = |C_k|, and μ|C_k is the posterior distribution given by μ|C_k(dθ) ∝ μ(dθ) ∏_{i∈C_k} F(x_i|θ).
This lemma immediately follows from Theorem 2 in [10], as DP is a special case of the so-called Normalized Random Measures with Independent Increments (NRMI). It is worth emphasizing
that p(D|x, z) is no longer a Dirichlet process, as the locations of the atoms φ₁, . . . , φ_K are non-deterministic; instead they follow the posterior distributions μ|C_k.
By marginalizing out the partition z_{1:n}, we obtain the posterior distribution p(D|x_{1:n}):
        p(D|x_{1:n}) = Σ_{z_{1:n}} p(z_{1:n}|x_{1:n}) p(D|x_{1:n}, z_{1:n}).    (8)
Let {C₁^{(z)}, . . . , C_K^{(z)}} be the partition corresponding to z_{1:n}; we have
        p(z_{1:n}|x_{1:n}) ∝ p_CRP(z_{1:n}|α) ∏_{k=1}^{K(z)} ∫ μ(dθ_k) ∏_{i∈C_k^{(z)}} F(x_i|θ_k).    (9)
4.2 Variational Approximation
Computing the predictive distribution based on Eq.(8) requires enumerating all possible partitions,
which grow exponentially as n increases. To tackle this difficulty, we resort to variational approximation, that is, to choose a tractable distribution to approximate p(D|x1:n , z1:n ).
In particular, we consider a family of random probability measures that can be expressed as follows:
        q(D|ρ, ν) = Σ_{z_{1:n}} ∏_{i=1}^n ρ_i(z_i) q_ν^{(z)}(D|z_{1:n}).    (10)
Here, q_ν^{(z)}(D|z_{1:n}) is a stochastic process conditioned on z_{1:n}, defined as
        q_ν^{(z)}(D|z_{1:n}) =_d β₀ D₀ + Σ_{k=1}^K β_k δ_{φ_k},
        with D₀ ∼ DP(αμ),  (β₀, β₁, . . . , β_K) ∼ Dir(α, m₁^{(z)}, . . . , m_K^{(z)}),  φ_k ∼ ν_k.    (11)
Here, we use =_d to indicate that drawing a sample from q_ν^{(z)} is equivalent to constructing one according
to the right hand side. In addition, m_k^{(z)} = |C_k^{(z)}| is the cardinality of the k-th cluster w.r.t. z_{1:n}, and
ν_k is a distribution over component parameters that is independent from z.
The variational construction in Eq.(10) and (11) is similar to Eq.(7) and (8), except for two significant
differences: (1) p(z_{1:n}|x_{1:n}) is replaced by a product distribution ∏_i ρ_i(z_i), and (2) μ|C_k, which
depends on z_{1:n}, is replaced by an independent distribution ν_k. With this design, z_i for different i and
φ_k for different k are independent w.r.t. q, thus resulting in a tractable predictive law below: Let q be
a random probability measure given by Eq.(10) and (11), then
        E_{q(D|ρ,ν)}[ p(θ′|D) ] = ( α/(α + n) ) μ(θ′) + Σ_{k=1}^K ( Σ_{i=1}^n ρ_i(k) / (α + n) ) ν_k(θ′).    (12)
The approximate posterior has two sets of parameters: ρ ≜ (ρ₁, . . . , ρ_n) and ν ≜ (ν₁, . . . , ν_K). With
this approximation, the task of Bayesian learning reduces to the problem of finding the optimal setting
of these parameters such that q(D|ρ, ν) best approximates the true posterior distribution.
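Under (12), the predictive mixture weights need only α and the accumulated soft counts Σ_i ρ_i(k). A minimal sketch (our own illustration):

```python
import numpy as np

def predictive_weights(rho_sums, alpha):
    """Mixing weights of Eq. (12): alpha for the base measure mu, then
    sum_i rho_i(k) for each existing component, all normalized by alpha + n."""
    n = rho_sums.sum()
    w = np.concatenate(([alpha], rho_sums))
    return w / (alpha + n)
```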
4.3 Sequential Approximation
The first problem here is to determine the value of K. A straightforward approach is to fix K to a large
number as in the truncated methods. This way, however, would incur substantial computational costs
on unnecessary components. We take a different approach here. Rather than randomly initializing a
fixed number of components, we begin with an empty model (i.e. K = 1) and progressively refine
the model as samples come in, adding new components on the fly when needed.
Specifically, when the first sample x₁ is observed, we introduce the first component and denote the
posterior for this component by ν₁. As there is only one component at this point, we have z₁ = 1,
i.e. ρ₁(z₁ = 1) = 1, and the posterior distribution over the component parameter is ν₁^{(1)}(dθ) ∝
μ(dθ) F(x₁|θ). Samples are brought in sequentially. In particular, we compute ρ_i, and update ν^{(i−1)}
to ν^{(i)} upon the arrival of the i-th sample x_i.
Suppose we have ρ = (ρ₁, . . . , ρ_i) and ν^{(i)} = (ν₁^{(i)}, . . . , ν_K^{(i)}) after processing i samples. To explain
x_{i+1}, we can use either of the K existing components or introduce a new component φ_{K+1}. Then the
posterior distribution of z_{i+1}, φ₁, . . . , φ_{K+1} given x₁, . . . , x_i, x_{i+1} is
        p(z_{i+1}, φ_{1:K+1}|x_{1:i+1}) ∝ p(z_{i+1}, φ_{1:K+1}|x_{1:i}) p(x_{i+1}|z_{i+1}, φ_{1:K+1}).    (13)
Using the tractable distribution q(·|ρ_{1:i}, ν^{(i)}) in Eq.(10) to approximate the posterior p(·|x_{1:i}), we get
        p(z_{i+1}, φ_{1:K+1}|x_{1:i+1}) ≈ q(z_{i+1}|ρ_{1:i}, ν^{(i)}) p(x_{i+1}|z_{i+1}, φ_{1:K+1}).    (14)
Then, the optimal settings of ρ_{i+1} and ν^{(i+1)} that minimize the Kullback-Leibler divergence between
q(z_{i+1}, φ_{1:K+1}|ρ_{1:i+1}, ν^{(i+1)}) and the approximate posterior in Eq.(14) are given as follows:
        ρ_{i+1}(k) ∝ w_k^{(i)} ∫ F(x_{i+1}|θ) ν_k^{(i)}(dθ)   for k ≤ K,
        ρ_{i+1}(K + 1) ∝ α ∫ F(x_{i+1}|θ) μ(dθ),    (15)
Algorithm 1 Sequential Bayesian Learning of DPMM (for conjugate cases).
Require: base measure params (η, η⁰), observed samples x₁, . . . , x_n, and threshold ε
  Let K = 1, ρ₁(1) = 1, w₁ = 1, η₁ = η + T(x₁), and η₁⁰ = η⁰ + τ.
  for i = 2 : n do
    T_i ← T(x_i), and b_i ← b(x_i)
    marginal log-likelihood:
        h_i(k) ← B(η_k + T_i, η_k⁰ + τ) − B(η_k, η_k⁰) − b_i   (k = 1, . . . , K)
        h_i(K + 1) ← B(η + T_i, η⁰ + τ) − B(η, η⁰) − b_i
    ρ_i(k) ← w_k e^{h_i(k)} / Σ_l w_l e^{h_i(l)} for k = 1, . . . , K + 1 with w_{K+1} = α
    if ρ_i(K + 1) > ε then
      w_k ← w_k + ρ_i(k), η_k ← η_k + ρ_i(k) T_i, and η_k⁰ ← η_k⁰ + ρ_i(k) τ, for k = 1, . . . , K
      w_{K+1} ← ρ_i(K + 1), η_{K+1} ← η + ρ_i(K + 1) T_i, and η_{K+1}⁰ ← η⁰ + ρ_i(K + 1) τ
      K ← K + 1
    else
      re-normalize ρ_i such that Σ_{k=1}^K ρ_i(k) = 1
      w_k ← w_k + ρ_i(k), η_k ← η_k + ρ_i(k) T_i, and η_k⁰ ← η_k⁰ + ρ_i(k) τ, for k = 1, . . . , K
    end if
  end for
with w_k^{(i)} = Σ_{j=1}^i ρ_j(k), and
        ν_k^{(i+1)}(dθ) ∝ μ(dθ) ∏_{j=1}^{i+1} F(x_j|θ)^{ρ_j(k)}   for k ≤ K,
        ν_{K+1}^{(i+1)}(dθ) ∝ μ(dθ) F(x_{i+1}|θ)^{ρ_{i+1}(K+1)}.    (16)
Discussion. There is a key distinction between this approximation scheme and conventional approaches: Instead of seeking the approximation of p(D|x1:n ), which is very difficult (D is infinite)
and unnecessary (only a finite number of components are useful), we try to approximate the posterior
of a finite subset of latent variables that are truly relevant for prediction, namely z and φ_{1:K+1}.
This sequential approximation scheme introduces a new component for each sample, resulting in
n components over the entire dataset. This, however, is unnecessary. We find empirically that for
most samples, ρ_i(K + 1) is negligible, indicating that the sample is adequately explained by existing
components, and there is no need of new components. In practice, we set a small value ε and increase
K only when ρ_i(K + 1) > ε. This simple strategy is very effective in controlling the model size.
5 Algorithm and Implementation
This section discusses the implementation of the sequential Bayesian learning algorithm under two
different circumstances: (1) μ and F are exponential family distributions that form a conjugate pair,
and (2) μ is not a conjugate prior w.r.t. F.
Conjugate Case.
In general, when μ is conjugate to F, they can be written as follows:
        μ(dθ|η, η⁰) = exp( ηᵀ φ(θ) − η⁰ A(θ) − B(η, η⁰) ) h(dθ),    (17)
        F(x|θ) = exp( φ(θ)ᵀ T(x) − τ A(θ) − b(x) ).    (18)
Here, the prior measure μ has a pair of natural parameters: (η, η⁰). Conditioned on a set of observations x₁, . . . , x_n, the posterior distribution remains in the same family as μ with parameters
(η + Σ_{i=1}^n T(x_i), η⁰ + nτ). In addition, the marginal likelihood is given by
        ∫_θ F(x|θ) μ(dθ|η, η⁰) = exp( B(η + T(x), η⁰ + τ) − B(η, η⁰) − b(x) ).    (19)
In such cases, both the base measure μ and the component-specific posterior measures ν_k can be
represented using the natural parameter pairs, which we denote by (η, η⁰) and (η_k, η_k⁰). With this
notation, we derive a sequential learning algorithm for conjugate cases, as shown in Alg 1.
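As a concrete instance, for the isotropic Gaussian model (22) with a conjugate N(0, σ_p²I) prior on component means, one pass of Algorithm 1 can be sketched as follows. This is our own illustration: the marginal likelihoods h_i(k) are written in closed Gaussian form rather than via the generic B(·,·) differences, and the state kept per component is (w_k, S_k) with S_k the weighted sum of assigned observations:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def sva_gaussian(X, alpha, sp2, sx2, eps=1e-4):
    """One-pass sequential variational approximation for the DPMM (22)."""
    n, d = X.shape
    w, S = [], []                                   # per-component statistics
    for x in X:
        logp = []
        for wk, Sk in zip(w, S):                    # existing components
            prec = 1.0 / sp2 + wk / sx2             # posterior precision of mu_k
            m = (Sk / sx2) / prec                   # posterior mean of mu_k
            v = 1.0 / prec + sx2                    # predictive variance
            logp.append(np.log(wk) + mvn.logpdf(x, m, v * np.eye(d)))
        logp.append(np.log(alpha) + mvn.logpdf(x, np.zeros(d),
                                               (sp2 + sx2) * np.eye(d)))
        logp = np.array(logp)
        rho = np.exp(logp - np.logaddexp.reduce(logp))   # Eq. (15)
        if len(w) > 0 and rho[-1] <= eps:
            rho = rho[:-1] / rho[:-1].sum()         # no new component needed
        for k in range(len(w)):                     # Eq. (16) sufficient stats
            w[k] += rho[k]
            S[k] += rho[k] * x
        if len(rho) > len(w):                       # open component K + 1
            w.append(rho[-1])
            S.append(rho[-1] * x)
    return w, S
```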
Non-conjugate Case. In practical models, it is not uncommon that μ and F are not a conjugate
pair. Unlike in the conjugate cases discussed above, there exist no formulas to update posterior
parameters or to compute the marginal likelihood in general. Here, we propose to address this issue using
stochastic optimization. Consider a posterior distribution given by p(θ|x_{1:n}) ∝ μ(θ) ∏_{i=1}^n F(x_i|θ).
A stochastic optimization method finds the MAP estimate of θ through update steps as below:
        θ ← θ + ε_i ( ∇_θ log μ(θ) + n ∇_θ log F(x_i|θ) ).    (20)
The basic idea here is to use the gradient computed at a particular sample x_i to approximate the true
gradient. This procedure converges to a (local) maximum, as long as the step sizes ε_i satisfy
Σ_{i=1}^∞ ε_i = ∞ and Σ_{i=1}^∞ ε_i² < ∞.
Incorporating the stochastic optimization method into our algorithm, we obtain a variant of Alg 1. The
general procedure is similar, except for the following changes: (1) It maintains point estimates of the
component parameters instead of the posterior, which we denote by θ̂₁, . . . , θ̂_K. (2) It computes the
log-likelihood as h_i(k) = log F(x_i|θ̂_k). (3) The estimates of the component parameters are updated
using the formula below:
        θ̂_k^{(i)} ← θ̂_k^{(i−1)} + ε_i ( ∇_θ log μ(θ) + n ρ_i(k) ∇_θ log F(x_i|θ) ).    (21)
Following the common practice of stochastic optimization, we set ε_i = i^{−κ}/n with κ ∈ (0.5, 1].
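A sketch of the update (21); this is our own illustration, with `grad_log_prior` and `grad_log_lik` standing in as hypothetical model-specific callables:

```python
def sgd_component_update(theta_k, x_i, rho_ik, i, n,
                         grad_log_prior, grad_log_lik, kappa=0.8):
    """One stochastic step of Eq. (21) for component k at sample i."""
    step = i ** (-kappa) / n                        # eps_i = i^{-kappa} / n
    grad = grad_log_prior(theta_k) + n * rho_ik * grad_log_lik(x_i, theta_k)
    return theta_k + step * grad
```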
Prune and Merge. As opposed to random initialization, components created during this sequential construction are often truly needed, as the decisions of creating new components are based on
knowledge accumulated from previous samples. However, it is still possible that some components
introduced at early iterations would become less useful and that multiple components may be similar.
We thus introduce a mechanism to remove undesirable components and merge similar ones.
We identify opportunities to make such adjustments by looking at the weights. Let w̃_k^{(i)} =
w_k^{(i)} / Σ_l w_l^{(i)} (with w_k^{(i)} = Σ_{j=1}^i ρ_j(k)) be the relative weight of a component at the i-th iteration. Once the relative weight of a component drops below a small threshold ε_r, we remove it to save
unnecessary computation on this component in the future.
The similarity between two components ν_k and ν_{k′} can be measured in terms of the distance between ρ_i(k) and ρ_i(k′) over all processed samples, as d_ρ(k, k′) = i⁻¹ Σ_{j=1}^i |ρ_j(k) − ρ_j(k′)|. We
increment ρ_i(k) to ρ_i(k) + ρ_i(k′) when ν_k and ν_{k′} are merged (i.e. d_ρ(k, k′) < ε_d). We also merge
the associated sufficient statistics (for conjugate case) or take a weighted average of the parameters
(for non-conjugate case). Generally, there is no need to perform such checks at every iteration. Since
computing this distance between a pair of components takes O(n), we propose to examine similarities
at an O(i · K)-interval so that the amortized complexity is maintained at O(nK).
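A sketch of this prune-and-merge bookkeeping on the matrix of stored assignment probabilities; this is our own illustration, and the thresholds are placeholders:

```python
import numpy as np

def prune_and_merge(R, eps_r=1e-3, eps_d=1e-2):
    """R is the i-by-K matrix of rho_j(k) for all processed samples j."""
    w = R.sum(axis=0)
    R = R[:, w / w.sum() >= eps_r]                  # prune light components
    k = 0
    while k < R.shape[1]:                           # merge near-duplicates
        for k2 in range(R.shape[1] - 1, k, -1):
            if np.abs(R[:, k] - R[:, k2]).mean() < eps_d:   # d_rho(k, k2)
                R[:, k] += R[:, k2]                 # fold k2 into k
                R = np.delete(R, k2, axis=1)
        k += 1
    return R
```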
Discussion. As compared to existing methods, the proposed method has several important advantages. First, it builds up the model on the fly, thus avoiding the need of randomly initializing a set of
components as required by truncation-based methods. The model learned in this way can be readily
extended (e.g. adding more components or adapting existing components) when new data is available.
More importantly, the algorithm can learn the model in one pass, without the need of iterative updates
over the data set. This distinguishes it from MCMC methods and conventional variational learning
algorithms, making it a great fit for large scale problems.
6 Experiments
To test the proposed algorithm, we conducted experiments on both synthetic data and real world
applications ? modeling image patches and document clustering. All algorithms are implemented
using Julia [2], a new language for high performance technical computing.
6.1 Synthetic Data
First, we study the behavior of the proposed algorithm on synthetic data. Specifically, we constructed
a data set comprised of 10000 samples in 9 Gaussian clusters of unit variance. The distances between
these clusters were chosen such that there exists moderate overlap between neighboring clusters. The
estimation of these Gaussian components is based on the DPMM below:
        D ∼ DP( α · N(0, σ_p² I) ),   μ_i ∼ D,   x_i ∼ N(μ_i, σ_x² I).    (22)
Figure 1: Gaussian clusters on synthetic data obtained using different methods (panels: CGS, TVF, SVA, SVA-PM). Both MC-SM and SVA-PM identified the 9 clusters correctly. The result of MC-SM is omitted here, as it looks the same as SVA-PM.
Figure 2: Joint log-likelihood on synthetic data as functions of run-time (curves: CGS, MC-SM, TVF, SVA, SVA-PM; x-axis in minutes). The likelihood values were evaluated on a held-out testing set. (Best to view with color)
Here, we set α = 1, σ_p = 100 and σ_x = 1.
We tested the following inference algorithms: Collapsed Gibbs sampling (CGS) [12], MCMC with
Split-Merge (MC-SM) [6], Truncation-Free Variational Inference (TFV) [21], Sequential Variational
Approximation (SVA), and its variant Sequential Variational Approximation with Prune and Merge
(SVA-PM). For CGS, MC-SM, and TFV, we run the updating procedures iteratively for one hour,
while for SVA and SVA-PM, we run only one-pass.
Figure 1 shows the resulting components. CGS and TFV yield obviously redundant components.
This corroborates observations in previous work [9]. Such nuisances are significantly reduced in
SVA, which only occasionally brings in redundant components. The key difference that leads to this
improvement is that CGS and TFV rely on random initialization to bootstrap the algorithm, which
would inevitably introduce similar components, while SVA leverages information gained from previous samples to decide whether new components are needed. Both MC-SM and SVA-PM produce
desired mixtures, demonstrating the importance of an explicit mechanism to remove redundancy.
Figure 2 plots the traces of joint log-likelihoods evaluated on a held-out set of samples. We can see that
SVA-PM quickly reaches the optimal solution in a matter of seconds. SVA also gets to a reasonable
solution within seconds, and then the progress slows down. Without the prune-and-merge steps, it
takes much longer for redundant components to fade out. MC-SM eventually reaches the optimal
solution after many iterations. Methods relying on local updates, including CGS and TFV, did not
even come close to the optimal solution within one hour. These results clearly demonstrate that our
progressive strategy, which gradually constructs the model through a series of informed decisions, is
much more efficient than random initialization followed by iterative updating.
6.2 Modeling Image Patches
Image patches, which capture local characteristics of images, play a fundamental role in various
computer vision tasks, such as image recovery and scene understanding. Many vision algorithms rely
on a patch dictionary to work. It has been a common practice in computer vision to use parametric
methods (e.g. K-means) to learn a dictionary of fixed size. This approach is inefficient when large
datasets are used. It is also difficult to extend a dictionary learned with a fixed K when new data arrive.
To tackle this problem, we applied our method to learn a nonparametric dictionary from the SUN
database [24], a large dataset comprised of over 130K images, which capture a broad variety of
scenes. We divided all images into two disjoint sets: a training set with 120K images and a testing
set with 10K. We extracted 2000 patches of size 32 ? 32 from each image, and characterize each
patch by a 128-dimensional SIFT feature. In total, the training set contains 240M feature vectors.
We respectively run TFV, SVA, and SVA-PM to learn a DPMM from the training set, based on the
Figure 3: Examples of image patch clusters learned using SVA-PM. Each row corresponds to a cluster. We can see similar patches are in the same cluster.
Figure 4: Average predictive log-likelihood on image modeling as functions of run-time (curves: TVF, SVA, SVA-PM; x-axis in hours).
Figure 5: Average predictive log-likelihood of document clusters as functions of run-time (curves: TVF, SVA, SVA-PM; x-axis in hours).
formulation given in Eq.(22), and evaluate the average predictive log-likelihood over the testing set as
the measure of performance. Figure 3 shows a small subset of patch clusters obtained using SVA-PM.
Figure 4 compares the trajectories of the average log-likelihoods obtained using different algorithms.
TFV takes multiple iterations to move from a random configuration to a sub-optimal one and get
trapped in a local optima. SVA steadily improves the predictive performance as it sees more samples.
We notice in our experiments that even without an explicit redundancy-removal mechanism, some
unnecessary components can still get removed when their relative weights decrease and become
negligible. SVA-PM accelerates this process by explicitly merging similar components.
6.3 Document Clustering
Next, we apply the proposed method to explore categories of documents. Unlike a standard topic modeling task, this is a higher-level application that builds on top of the topic representation. Specifically,
topic proportions. We assume that the topic proportion vector is generated from a category-specific
Dirichlet distribution, as follows
        D ∼ DP( α · Dir_sym(γ_p) ),   μ_i ∼ D,   x_i ∼ Dir(γ_x μ_i).    (23)
Here, the base measure is a symmetric Dirichlet distribution. To generate a document, we draw a
mean probability vector μ_i from D, and generate the topic proportion vector x_i from Dir(γ_x μ_i).
The parameter γ_x is a design parameter that controls how far x_i may deviate from the category-specific center μ_i. Note that this is not a conjugate model, and we use stochastic optimization instead
of Bayesian updates in SVA (see section 5).
We performed the experiments on the New York Times database, which contains about 1.8M articles
from year 1987 to 2007. We pruned the vocabulary to 5000 words by removing stop words and
those with low TF-IDF scores, and obtained 150 topics by running LDA [3] on a subset of 20K
documents. Then, each document is represented by a 150-dimensional vector of topic proportions.
We held out 10K documents for testing and use the remaining to train the DPMM. We compared SVA,
SVA-PM, and TVF. The traces of log-likelihood values are shown in Figure 5. We observe similar
trends as above: SVA and SVA-PM attain better solutions more quickly, while TVF is less efficient
and is prone to being trapped in local maxima. Also, TVF tends to generate more components than
necessary, while SVA-PM maintains a better performance using far fewer components.
7 Conclusion
We presented an online Bayesian learning algorithm to estimate DP mixture models. The proposed
method does not require random initialization. Instead, it can reliably and efficiently learn a DPMM
from scratch through sequential approximation in a single pass. The algorithm takes in data in a
streaming fashion, and thus can be easily adapted to new data. Experiments on both synthetic data
and real applications have demonstrated that our algorithm achieves remarkable speedup: it can
attain nearly optimal configuration within seconds or minutes, while mainstream methods may take
hours or even longer. It is worth noting that the approximation is derived based on the predictive law
of DPMM. It is an interesting future direction to investigate how it can be generalized to a broader
family of BNP models, such as HDP, Pitman-Yor processes, and NRMIs [10].
References
[1] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2(6):1152-1174, 1974.
[2] Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and Alan Edelman. Julia: A fast dynamic language for technical computing. CoRR, abs/1209.5145, 2012.
[3] David Blei, Andrew Ng, and Michael Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] David M. Blei and Michael I. Jordan. Variational methods for the Dirichlet process. In Proc. of ICML'04, 2004.
[5] Michael Bryant and Erik Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Proc. of NIPS'12, 2012.
[6] David B. Dahl. Sequentially-allocated merge-split sampler for conjugate and nonconjugate Dirichlet process mixture models, 2005.
[7] Nils Lid Hjort, Chris Holmes, Peter Müller, and Stephen G. Walker. Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, 2010.
[8] Matt Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. arXiv eprints, 1206.7501, 2012.
[9] Michael C. Hughes, Emily B. Fox, and Erik B. Sudderth. Effective split-merge Monte Carlo methods for nonparametric models of sequential data. 2012.
[10] Lancelot F. James, Antonio Lijoi, and Igor Prünster. Posterior analysis for normalized random measures with independent increments. Scandinavian Journal of Statistics, 36:76-97, 2009.
[11] Kenichi Kurihara, Max Welling, and Yee Whye Teh. Collapsed variational Dirichlet process mixture models. In Proc. of IJCAI'07, 2007.
[12] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[13] David J. Nott, Xiaole Zhang, Christopher Yau, and Ajay Jasra. A sequential algorithm for fast fitting of Dirichlet process mixture models. In arXiv:1301.2897, 2013.
[14] Ian Porteous, Alex Ihler, Padhraic Smyth, and Max Welling. Gibbs sampling for (coupled) infinite mixture models in the stick-breaking representation. In Proc. of UAI'06, 2006.
[15] Carl Edward Rasmussen. The infinite Gaussian mixture model. In Proc. of NIPS'00, 2000.
[16] Jayaram Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[17] S. Jain and R. M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13(1):158-182, 2004.
[18] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2007.
[19] Y. W. Teh, K. Kurihara, and Max Welling. Collapsed variational inference for HDP. In Proc. of NIPS'07, volume 20, 2007.
[20] Chong Wang and David Blei. A split-merge MCMC algorithm for the hierarchical Dirichlet process. arXiv eprints, 1201.1657, 2012.
[21] Chong Wang and David Blei. Truncation-free stochastic variational inference for Bayesian nonparametric models. In Proc. of NIPS'12, 2012.
[22] Chong Wang and David M. Blei. Variational inference for the nested Chinese restaurant process. In Proc. of NIPS'09, 2009.
[23] Chong Wang, John Paisley, and David Blei. Online variational inference for the hierarchical Dirichlet process. In AISTATS'11, 2011.
[24] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Proc. of CVPR'10, 2010.
Dirichlet Process Mixture Models
Michael C. Hughes and Erik B. Sudderth
Department of Computer Science, Brown University, Providence, RI 02912
[email protected], [email protected]
Abstract
Variational inference algorithms provide the most effective framework for largescale training of Bayesian nonparametric models. Stochastic online approaches
are promising, but are sensitive to the chosen learning rate and often converge
to poor local optima. We present a new algorithm, memoized online variational
inference, which scales to very large (yet finite) datasets while avoiding the complexities of stochastic gradient. Our algorithm maintains finite-dimensional sufficient statistics from batches of the full dataset, requiring some additional memory but still scaling to millions of examples. Exploiting nested families of variational bounds for infinite nonparametric models, we develop principled birth and
merge moves allowing non-local optimization. Births adaptively add components
to the model to escape local optima, while merges remove redundancy and improve speed. Using Dirichlet process mixture models for image clustering and
denoising, we demonstrate major improvements in robustness and accuracy.
1 Introduction
Bayesian nonparametric methods provide a flexible framework for unsupervised modeling of structured data like text documents, time series, and images. They are especially promising for large
datasets, as their nonparametric priors should allow complexity to grow smoothly as more data is
seen. Unfortunately, contemporary inference algorithms do not live up to this promise, scaling
poorly and yielding solutions that represent poor local optima of the true posterior. In this paper, we
propose new scalable algorithms capable of escaping local optima. Our focus is on clustering data
via the Dirichlet process (DP) mixture model, but our methods are much more widely applicable.
Stochastic online variational inference is a promising general-purpose approach to Bayesian nonparametric learning from streaming data [1]. While individual steps of stochastic optimization
algorithms are by design scalable, they are extremely vulnerable to local optima for non-convex
unsupervised learning problems, frequently yielding poor solutions (see Fig. 2). While taking the
best of multiple runs is possible, this is unreliable, expensive, and ineffective in more complex
structured models. Furthermore, the noisy gradient step size (or learning rate) requires external parameters which must be fine-tuned for best performance, often requiring an expensive validation
procedure. Recent work has proposed methods for automatically adapting learning rates [2], but
these algorithms? progress on the overall variational objective remains local and non-monotonic.
In this paper, we present an alternative algorithm, memoized online variational inference, which
avoids noisy gradient steps and learning rates altogether. Our method is useful when all data may
not fit in memory, but we can afford multiple full passes through the data by processing successive
batches. The algorithm visits each batch in turn and updates a cached set of sufficient statistics which
accurately reflect the entire dataset. This allows rapid and noise-free updates to global parameters
at every step, quickly propagating information and speeding convergence. Our memoized approach
is generally applicable in any case batch or stochastic online methods are useful, including topic
models [1] and relational models [3], though we do not explore these here.
1
We further develop a principled framework for escaping local optima in the online setting, by integrating birth and merge moves within our algorithm?s coordinate ascent steps. Most existing meanfield algorithms impose a restrictive fixed truncation in the number of components, which is hard to
set a priori on big datasets: either it is too small and inexpressive, or too large and computationally
inefficient. Our birth and merge moves, together with a nested variational approximation to the posterior, enable adaptive creation and pruning of clusters on-the-fly. Because these moves are validated
by an exactly tracked global variational objective, we avoid potential instabilities of stochastic online split-merge proposals [4]. The structure of our moves is very different from split-merge MCMC
methods [5, 6]; applications of these algorithms have been limited to hundreds of data points, while
our experiments show scaling of memoized split-merge proposals to millions of examples.
We review the Dirichlet process mixture model and variational inference in Sec. 2, outline our novel
memoized algorithm in Sec. 3, and evaluate on clustering and denoising applications in Sec. 4.
2 Variational inference for Dirichlet process mixture models
The Dirichlet process (DP) provides a nonparametric prior for partitioning exchangeable datasets
into discrete clusters [7]. An instantiation G of a DP is an infinite collection of atoms, each of which
represents one mixture component. Component k has mixture weight wk sampled as follows:
G ? DP(?0 H),
G,
?
X
wk ??k ,
vk ? Beta(1, ?0 ),
wk = vk
k=1
k?1
Y
(1 ? v? ).
(1)
?=1
This stick-breaking process provides mixture weights and parameters. Each data item n chooses an assignment $z_n \sim \mathrm{Cat}(w)$, and then draws observations $x_n \sim F(\phi_{z_n})$. The data-generating parameter $\phi_k$ is drawn from a base measure H with natural parameters $\lambda_0$. We assume both H and F belong to exponential families with log-normalizers a and sufficient statistics t:
$$p(\phi_k \mid \lambda_0) = \exp\big\{\lambda_0^T t_0(\phi_k) - a_0(\lambda_0)\big\}, \qquad p(x_n \mid \phi_k) = \exp\big\{\phi_k^T t(x_n) - a(\phi_k)\big\}. \tag{2}$$
For simplicity, we assume unit reference measures. The goal of inference is to recover stick-breaking proportions $v_k$ and data-generating parameters $\phi_k$ for each global mixture component k, as well as discrete cluster assignments $z = \{z_n\}_{n=1}^{N}$ for each observation. The joint distribution is
$$p(x, z, \phi, v) = \prod_{n=1}^{N} F(x_n \mid \phi_{z_n})\,\mathrm{Cat}(z_n \mid w(v)) \prod_{k=1}^{\infty} \mathrm{Beta}(v_k \mid 1, \alpha_0)\, H(\phi_k \mid \lambda_0) \tag{3}$$
While our algorithms are directly applicable to any DP mixture of exponential families, our experiments focus on D-dimensional real-valued data xn , for which we take F to be Gaussian. For some
data, we consider full-mean, full-covariance analysis (where H is normal-Wishart), while other applications consider zero-mean, full-covariance analysis (where H is Wishart).
2.1 Mean-field variational inference for DP mixture models
To approximate the full (but intractable) posterior over variables $z, v, \phi$, we consider a fully-factorized variational distribution q, with individual factors from appropriate exponential families:¹
$$q(z, v, \phi) = \prod_{n=1}^{N} q(z_n \mid \hat r_n) \prod_{k=1}^{K} q(v_k \mid \hat\alpha_{k1}, \hat\alpha_{k0})\, q(\phi_k \mid \hat\lambda_k), \tag{4}$$
$$q(z_n) = \mathrm{Cat}(z_n \mid \hat r_{n1}, \ldots, \hat r_{nK}), \qquad q(v_k) = \mathrm{Beta}(v_k \mid \hat\alpha_{k1}, \hat\alpha_{k0}), \qquad q(\phi_k) = H(\phi_k \mid \hat\lambda_k). \tag{5}$$
To tractably handle the infinite set of components available under the DP prior, we truncate the discrete assignment factor to enforce $q(z_n = k) = 0$ for $k > K$. This forces all data to be explained by only the first K components, inducing conditional independence between observed data and any global parameters $v_k, \phi_k$ with index $k > K$. Inference may thus focus exclusively on a finite set of K components, while reasonably approximating the true infinite posterior for large K.
¹To ease notation, we mark variables with hats to distinguish parameters $\hat\lambda$ of variational factors q from parameters $\lambda$ of the generative model p. In this way, $\lambda_k$ and $\hat\lambda_k$ always have equal dimension.
Crucially, our truncation is nested: any learned q with truncation K can be represented exactly under
truncation K +1 by setting the final component to have zero mass. This truncation, previously advocated by [8, 4], has considerable advantages over non-nested direct truncation of the stick-breaking
process [7], which places artificially large mass on the final component. It is more efficient and
broadly applicable than an alternative truncation which sets the stick-breaking "tail" to its prior [9].
Variational algorithms optimize the parameters of q to minimize the KL divergence from the true,
intractable posterior [7]. The optimal q maximizes the evidence lower bound (ELBO) objective L:
$$\log p(x \mid \alpha_0, \lambda_0) \;\geq\; \mathcal{L}(q) \,\triangleq\, \mathbb{E}_q\big[\log p(x, v, z, \phi \mid \alpha_0, \lambda_0) - \log q(v, z, \phi)\big] \tag{6}$$
For DP mixtures of exponential family distributions, $\mathcal{L}(q)$ has a simple form. For each component k, we store its expected mass $\hat N_k$ and expected sufficient statistic $s_k(x)$. All but one term in $\mathcal{L}(q)$ can then be written using only these summaries and expectations of the global parameters $v, \phi$:
$$\hat N_k \triangleq \mathbb{E}_q\Big[\sum_{n=1}^{N} z_{nk}\Big] = \sum_{n=1}^{N} \hat r_{nk}, \qquad s_k(x) \triangleq \mathbb{E}_q\Big[\sum_{n=1}^{N} z_{nk}\, t(x_n)\Big] = \sum_{n=1}^{N} \hat r_{nk}\, t(x_n), \tag{7}$$
$$\mathcal{L}(q) = \sum_{k=1}^{K} \Bigg( \mathbb{E}_q[\phi_k]^T s_k(x) - \hat N_k \mathbb{E}_q[a(\phi_k)] + \hat N_k \mathbb{E}_q[\log w_k(v)] - \sum_{n=1}^{N} \hat r_{nk} \log \hat r_{nk} + \mathbb{E}_q\bigg[\log \frac{\mathrm{Beta}(v_k \mid 1, \alpha_0)}{q(v_k \mid \hat\alpha_{k1}, \hat\alpha_{k0})}\bigg] + \mathbb{E}_q\bigg[\log \frac{H(\phi_k \mid \lambda_0)}{q(\phi_k \mid \hat\lambda_k)}\bigg] \Bigg) \tag{8}$$
Excluding the entropy term ? r?nk log r?nk which we discuss later, this bound is a simple linear
?k , sk (x). Given precomputed entropies and summaries, evaluation of
function of the summaries N
L(q) can be done in time independent of the data size N . We next review variational algorithms
for optimizing q via coordinate ascent, iteratively updating individual factors of q. We describe
algorithms in terms of two updates [1]: global parameters (stick-breaking proportions vk and datagenerating parameters ?k ), and local parameters (assignments of data to components zn ).
2.2 Full-dataset variational inference
Standard full-dataset variational inference [7] updates local factors q(zn | r?n ) for all observations
n = 1, . . . , N by visiting each item n and computing the fraction r?nk explained by component k:
r?nk
r?nk = exp Eq [log wk (v)] + Eq [log p(xn | ?k )] , r?nk = PK
.
(9)
?n?
?=1 r
Next, we update global factors $q(v_k \mid \hat\alpha_{k1}, \hat\alpha_{k0})$, $q(\phi_k \mid \hat\lambda_k)$ for each component k. After computing summary statistics $\hat N_k, s_k(x)$ given the new $\hat r_{nk}$ via Eq. (7), the update equations become
$$\hat\alpha_{k1} = 1 + \hat N_k, \qquad \hat\alpha_{k0} = \alpha_0 + \sum_{\ell=k+1}^{K} \hat N_\ell, \qquad \hat\lambda_k = \lambda_0 + s_k(x). \tag{10}$$
While simple and guaranteed to converge, this approach scales poorly to big datasets. Because
global parameters are updated only after a full pass through the data, information propagates slowly.
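To make the two update types concrete, here is a minimal sketch of one full-dataset coordinate-ascent pass, assuming generic helper callables (log_w, log_lik, t_stats) supplied by the model; these names and the NumPy layout are our assumptions, not the paper's code.

```python
import numpy as np

def full_dataset_vb_pass(X, K, alpha0, lam0, t_stats, log_w, log_lik):
    """One full coordinate-ascent pass (Eqs. 9-10), sketch only.

    t_stats(X)   : sufficient statistics t(x_n) for each row of X, shape (N, D)
    log_w(k)     : E_q[log w_k(v)] under current global factors
    log_lik(n,k) : E_q[log p(x_n | phi_k)] under current global factors
    """
    N = X.shape[0]
    # Local step (Eq. 9): responsibilities for every item.
    logR = np.array([[log_w(k) + log_lik(n, k) for k in range(K)]
                     for n in range(N)])
    logR -= logR.max(axis=1, keepdims=True)       # numerical stability
    R = np.exp(logR)
    R /= R.sum(axis=1, keepdims=True)
    # Summary statistics (Eq. 7).
    Nk = R.sum(axis=0)                            # expected mass per component
    Sk = R.T @ t_stats(X)                         # expected sufficient stats
    # Global step (Eq. 10).
    alpha1_hat = 1.0 + Nk
    alpha0_hat = alpha0 + np.cumsum(Nk[::-1])[::-1] - Nk  # sum over l > k
    lam_hat = lam0 + Sk
    return R, (alpha1_hat, alpha0_hat, lam_hat)
```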
2.3 Stochastic online variational inference
Stochastic online (SO) variational inference scales to huge datasets [1]. Instead of analyzing all data
at once, SO processes only a subset ("batch") $B_t$ at each iteration t. These subsets are assumed
sampled uniformly at random from a larger (but fixed size N ) corpus. Given a batch, SO first
updates local factors q(zn ) for n ? Bt via Eq. (9). It then updates global factors via a noisy gradient
step, using sufficient statistics of q(zn ) from only the current batch. These steps optimize a noisy
function, which in expectation (with respect to batch sampling) converges to the true objective (6).
Natural gradient steps are computationally tractable for exponential family models, involving nearly the same computations as the full-dataset updates [1]. For example, to update the variational parameter $\hat\lambda_k$ from (5) at iteration t, we first compute the global update given only data in the current batch, amplified to be at full-dataset scale: $\hat\lambda_k^{\star} = \lambda_0 + \frac{N}{|B_t|} s_k(B_t)$. Then, we interpolate between this and the previous global parameters to arrive at the final result: $\hat\lambda_k^{(t)} \leftarrow \rho_t \hat\lambda_k^{\star} + (1 - \rho_t)\,\hat\lambda_k^{(t-1)}$. The learning rate $\rho_t$ controls how "forgetful" the algorithm is of previous values; if it decays at appropriate rates, stochastic inference provably converges to a local optimum of the global objective $\mathcal{L}(q)$ [1].
This online approach has clear computational advantages and can sometimes yield higher quality
solutions than the full-data algorithm, since it conveys information between local and global parameters more frequently. However, performance is extremely sensitive to the learning rate decay
schedule and choice of batch size, as we demonstrate in later experiments.
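For contrast, here is a sketch of the stochastic online global step described above; the schedule form matches the decay schedule used in the experiments of Sec. 4, while the function name and signature are illustrative assumptions.

```python
import numpy as np

def stochastic_online_step(batch_stats, lam_prev, lam0, N, batch_size, t,
                           kappa=0.5, delay=10.0):
    """One noisy natural-gradient update of lambda_hat_k (sketch).

    batch_stats : s_k(B_t), sufficient statistics of the current batch
    kappa/delay : learning-rate schedule rho_t = (t + delay)**(-kappa)
    """
    rho_t = (t + delay) ** (-kappa)
    # Batch-only estimate, amplified to full-dataset scale.
    lam_star = lam0 + (N / batch_size) * batch_stats
    # Interpolate with the previous global parameters.
    return rho_t * lam_star + (1.0 - rho_t) * lam_prev
```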
3 Memoized online variational inference
Generalizing previous incremental variants of the expectation maximization (EM) algorithm [10], we now develop our memoized online variational inference algorithm. We divide the data into B fixed batches $\{B_b\}_{b=1}^{B}$. For each batch, we maintain memoized sufficient statistics $S_k^b = [\hat N_k(B_b), s_k(B_b)]$ for each component k. We also track the full-dataset statistics $S_k^0 = [\hat N_k, s_k(x)]$. These compact summary statistics allow guarantees of correct full-dataset analysis while processing only one small batch at a time. Our approach hinges on the fact that these sufficient statistics are additive: summaries of an entire dataset can be written exactly as the addition of summaries of distinct batches. Note that our memoization of deterministic analyses of batches of data is distinct from the stochastic memoization, or "lazy" instantiation, of random variables in some Monte Carlo methods [11, 12].
Memoized inference proceeds by visiting (in random order) each distinct batch once in a full pass
through the data, incrementally updating the local and global parameters related to that batch b.
First, we update local parameters for the current batch ($q(z_n \mid \hat r_n)$ for $n \in B_b$) via Eq. (9). Next, we update cached global sufficient statistics for each component: we subtract the old (cached) summary of batch b, compute a new batch-level summary, and add the result to the full-dataset summary:
$$S_k^0 \leftarrow S_k^0 - S_k^b, \qquad S_k^b \leftarrow \Big[\sum_{n \in B_b} \hat r_{nk},\; \sum_{n \in B_b} \hat r_{nk}\, t(x_n)\Big], \qquad S_k^0 \leftarrow S_k^0 + S_k^b. \tag{11}$$
Finally, given the new full-dataset summary $S_k^0$, we update global parameters exactly as in Eq. (10).
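A minimal sketch of the cached-statistics bookkeeping in Eq. (11); the class layout and the helper callables local_step and t_stats are our own assumptions for illustration.

```python
import numpy as np

class MemoizedStats:
    """Cached per-batch and full-dataset sufficient statistics (Eq. 11).

    `local_step` stands for the Eq. (9) responsibility update and `t_stats`
    for the model's sufficient statistics; both are assumed given.
    """

    def __init__(self, n_batches, K, dim):
        self.batch_N = np.zeros((n_batches, K))       # cached N_k(B_b)
        self.batch_S = np.zeros((n_batches, K, dim))  # cached s_k(B_b)
        self.full_N = np.zeros(K)                     # full-dataset N_k
        self.full_S = np.zeros((K, dim))              # full-dataset s_k(x)

    def update_batch(self, b, X_b, local_step, t_stats):
        R = local_step(X_b)                           # Eq. (9), shape (n_b, K)
        # Subtract the stale summary of batch b ...
        self.full_N -= self.batch_N[b]
        self.full_S -= self.batch_S[b]
        # ... recompute it from the new responsibilities ...
        self.batch_N[b] = R.sum(axis=0)
        self.batch_S[b] = R.T @ t_stats(X_b)
        # ... and add it back, leaving exact full-dataset summaries.
        self.full_N += self.batch_N[b]
        self.full_S += self.batch_S[b]
        return self.full_N, self.full_S               # feed into Eq. (10)
```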
Unlike stochastic online algorithms, memoized inference is guaranteed to improve the full-dataset
ELBO at every step. Correctness follows immediately from the arguments in [10]. By construction,
each local or global step will result in a new q that strictly increases the objective L(q).
In the limit where B = 1, memoized inference reduces to standard full-dataset updates. However,
given many batches it is far more scalable, while maintaining all guarantees of batch inference. Furthermore, it generally converges faster than the full-dataset algorithm due to frequently interleaving
global and local updates. Provided we can store memoized sufficient statistics for each batch (not
each observation), memoized inference has the same computational complexity as stochastic methods while avoiding noise and sensitivity to learning rates. Recent analysis of convex optimization
algorithms [13] demonstrated theoretical and practical advantages for methods that use cached fulldataset summaries to update parameters, as we do, instead of stochastic current-batch-only updates.
This memoized algorithm can compute the full-dataset objective $\mathcal{L}(q)$ exactly at any point (after visiting all items once). To do so efficiently, we need to compute and store the assignment entropy $H_k^b = -\sum_{n \in B_b} \hat r_{nk} \log \hat r_{nk}$ after visiting each batch b. We also need to track the full-data entropy $H_k^0 = \sum_{b=1}^{B} H_k^b$, which is additive just like the sufficient statistics and incrementally updated after each batch. Given both $H_k^0$ and $S_k^0$, evaluation of the full-dataset ELBO in Eq. (8) is exact and rapid.
3.1 Birth moves to escape local optima
We now propose additional birth moves that, when interleaved with conventional coordinate ascent
parameter updates, can add useful new components to the model and escape local optima. Previous
methods [14, 9, 4] for changing variational truncations create just one extra component via a "split"
move that is highly-specialized to particular likelihoods. Wang and Blei [15] explore truncation
levels via a local collapsed Gibbs sampler, but samplers are slow to make large changes. In contrast,
our births add many components at once and apply to any exponential family mixture model.
[Figure 1: One pass through a toy dataset for memoized learning with birth and merge moves (MO-BM), showing creation (left) and adoption (right) of new components. Left: Scatter plot of 2D observed data, and a subsample targeted via the first mixture component; elliptical contours show component covariance matrices. Right: Bar plots of memoized counts $\hat N_k^b$ for each batch. Not shown: memoized sufficient statistics $s_k^b$. Panel annotations: 1) create new components by learning a fresh DP-GMM on the subsample via VB; 2) adopt the fresh components in one pass through the data; 3) merge to remove redundancy. Batches not yet updated on this pass do not use any new components.]
Creating new components in the online setting is challenging. Each small batch may not have
enough examples of a missing component to inspire a good proposal, even if that component is wellsupported by the full dataset. We thus advocate birth moves that happen in three phases (collection,
creation, and adoption) over two passes of the data. The first pass collects a targeted data sample
more likely to yield informative proposals than a small, predefined batch. The second pass, shown in
Fig. 1, creates new components and then updates every batch with the expanded model. Successive
births are interleaved; at each pass there are proposals both active and in preparation. We sketch out
each step of the algorithm below. For complete details, see the supplement.
Collection During pass 1, we collect a targeted subsample $x'$ of the data, of size at most $N' = 10^4$. This subsample targets a single component $k'$. When visiting each batch, we copy data $x_n$ into $x'$ if $\hat r_{nk'} > \tau$ (we set $\tau = 0.1$). This threshold test ensures the subsample contains related data, but also promotes diversity by considering data explained partially by other components $k \neq k'$. Targeted samples vary from iteration to iteration because batches are visited in distinct, random orders.
Creation Before pass 2, we create new components by fitting a DP mixture model with $K'$ (we take $K' = 10$) components to $x'$, running variational inference for a limited budget of iterations. Taking advantage of our nested truncation, we expand our current model to include all $K + K'$ components, as shown in Fig. 1. Unlike previous work [9, 4], we do not immediately assess the change in ELBO produced by these new components, and always accept them. We rely on subsequent merge moves (Sec. 3.2) to remove unneeded components.
Adoption During pass 2, we visit each batch and perform local and global parameter updates for the expanded $(K + K')$-component mixture. These updates use expanded global summaries $S^0$ that include summaries $S'$ from the targeted analysis of $x'$. This results in two interpretations of the subset $x'$: assignment to original components (mostly $k'$) and assignment to brand-new components. If the new components are favored, they will gain mass via new assignments made at each batch. After the pass, we subtract away $S'$ to yield both $S^0$ and global parameters exactly consistent with the data x. Any nearly-empty new component will likely be pruned away by later merges.
By adding many components at once, our birth move allows rapid escape from poor local optima.
Alone, births may sometimes cause a slight ELBO decrease by adding unnecessary components.
However, in practice merge moves reliably reject poor births and restore original configurations. In
Sec. 4, births are so effective that runs started at K = 1 recover necessary components on-the-fly.
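The collection phase of the birth move can be sketched as follows; resp_fn and the array-based batch interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def collect_targeted_subsample(batches, resp_fn, k_target,
                               thresh=0.1, max_size=10_000):
    """Pass-1 collection step of the birth move (sketch).

    resp_fn(X) returns current responsibilities (Eq. 9); rows whose
    responsibility for `k_target` exceeds `thresh` join the subsample.
    """
    chosen = []
    total = 0
    for X_b in batches:                      # batches visited in random order
        R = resp_fn(X_b)
        mask = R[:, k_target] > thresh
        take = X_b[mask][: max(0, max_size - total)]
        chosen.append(take)
        total += take.shape[0]
        if total >= max_size:
            break
    return np.vstack(chosen) if chosen else np.empty((0, batches[0].shape[1]))
```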
3.2 Merge moves that optimize the full data objective
The computational cost of inference grows with the number of components K. To keep K small, we
develop merge moves that replace two components with a single merged one. Merge moves were
first explored for batch variational methods [16, 14]. For hierarchical DP topic models, stochastic
[Figure 2: Comparison of full-data, stochastic (SO), and memoized (MO) on toy data with K = 8 true components (Sec. 4.1). Top: Trace of ELBO during training across 10 runs; SO compared with learning rates a, b, c; data are 5×5 patches. Bottom left: Example patch generated by each component. Bottom: Covariance matrix and weights $w_k$ found by one run of each method, aligned to true components. "X": no comparable component found.]
variational inference methods have been augmented to evaluate merge proposals based on noisy,
single-batch estimates of the ELBO [4]. This can result in accepted merges that decrease the full-data objective (see Sec. 4.1 for an empirical illustration). In contrast, our algorithm accurately computes the full ELBO for each merge proposal, ensuring only useful merges are accepted.
Given two components $k_a, k_b$ to merge, we form a candidate $q'$ with $K - 1$ components, where merged component $k_m$ takes over all assignments to $k_a, k_b$: $\hat r_{nk_m} = \hat r_{nk_a} + \hat r_{nk_b}$. Instead of computing new assignments explicitly, additivity allows direct construction of merged global sufficient statistics: $S_{k_m}^0 = S_{k_a}^0 + S_{k_b}^0$. Merged global parameters follow from Eq. (10).
Our merge move has three steps: select components, form the candidate configuration $q'$, and accept $q'$ if the ELBO improves. Selecting $k_a, k_b$ to merge at random is unlikely to yield an improved configuration. After choosing $k_a$ at random, we select $k_b$ using a ratio of marginal likelihoods M which compares the merged and separated configurations, easily computed with cached summaries:
$$p(k_b \mid k_a) \propto \frac{M(S_{k_a} + S_{k_b})}{M(S_{k_a})\, M(S_{k_b})}, \qquad M(S_k) = \exp\big\{a_0(\lambda_0 + s_k(x))\big\}. \tag{12}$$
Our memoized approach allows exact evaluation of the full-data ELBO to compare the existing q to merge candidate $q'$. As shown in Eq. (8), evaluating $\mathcal{L}(q')$ is a linear function of merged sufficient statistics, except for the assignment entropy term: $H_{ab} = -\sum_{n=1}^{N} (\hat r_{nk_a} + \hat r_{nk_b}) \log(\hat r_{nk_a} + \hat r_{nk_b})$. We compute this term in advance for all possible merge pairs. This requires storing one set of $K(K-1)/2$ scalars, one per candidate pair, for each batch. This modest precomputation allows rapid and exact merge moves, which improve model quality and speed up post-merge iterations.
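A sketch of the partner-selection step in Eq. (12); log_marg stands for log M(S_k), whose closed form depends on the conjugate prior H and is assumed supplied rather than implemented here.

```python
import numpy as np

def propose_merge_partner(k_a, S_N, S_stats, log_marg, rng):
    """Select k_b to merge with k_a via the ratio in Eq. (12) (sketch).

    S_N, S_stats : cached expected counts and sufficient statistics per k
    log_marg(N, s) : assumed helper returning log M(S_k) = a_0(lambda_0 + s)
    """
    K = len(S_N)
    scores = np.full(K, -np.inf)
    for k_b in range(K):
        if k_b == k_a:
            continue
        merged = log_marg(S_N[k_a] + S_N[k_b], S_stats[k_a] + S_stats[k_b])
        scores[k_b] = (merged
                       - log_marg(S_N[k_a], S_stats[k_a])
                       - log_marg(S_N[k_b], S_stats[k_b]))
    p = np.exp(scores - scores.max())   # exp(-inf) = 0 masks k_a itself
    return rng.choice(K, p=p / p.sum())
```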
In one pass of the data, our algorithm performs a birth, memoized ascent steps for all batches, and
several merges after the final batch. After a few passes, it recovers high-quality, compact structure.
4 Experimental results
We now compare algorithms for learning DP-Gaussian mixture models (DP-GMM), using our own
implementations of full-dataset, stochastic online (SO), and memoized online (MO) inference, as
well as our new birth-merge memoized algorithm (MO-BM). Code is available online. To examine
SO's sensitivity to learning rate, we use a recommended [1] decay schedule $\rho_t = (t + d)^{-\kappa}$ with
three diverse settings: a) $\kappa = 0.5, d = 10$; b) $\kappa = 0.5, d = 100$; and c) $\kappa = 0.9, d = 10$.
4.1 Toy data: How reliably do algorithms escape local optima?
We first study N = 100000 synthetic image patches generated by a zero-mean GMM with 8 equally-common components. Each one is defined by a 25 × 25 covariance matrix producing 5 × 5 patches
with a strong edge. We investigate whether algorithms recover the true K = 8 structure. Each fixed-truncation method runs from 10 fixed random initializations with K = 25, while MO-BM starts at
K = 1. Online methods traverse 100 batches (1000 examples per batch).
[Figure 3: MNIST. Top: Comparison of final ELBO for multiple runs of each method, varying initialization (random vs. smart k-means++) and number of batches (20 vs. 100); stochastic online (SO) compared at learning rates a, b, c. Bottom left: Visualization of cluster means for MO-BM's best run. Bottom center: Evaluation of cluster alignment to true digit label. Bottom right: Growth in truncation-level K as more data is visited with MO-BM.]
Fig. 2 traces the training-set ELBO as more data arrives for each algorithm and shows estimated
covariance matrices for the top 8 components for select runs. Even the best runs of SO do not
recover ideal structure. In contrast, all 10 runs of our birth-merge algorithm find all 8 components,
despite initialization at K = 1. The ELBO trace plots show this method escaping local optima, with
slight drops indicating addition of new components followed by rapid increases as these are adopted.
They further suggest that our fixed-truncation memoized method competes favorably with full-data
inference, often converging to similar or better solutions after fewer passes through the data.
The fact that our MO-BM algorithm only performs merges that improve the full-data ELBO is crucial. Fig. 2 shows trace plots of GreedyMerge, a memoized online variant that instead uses only the
current-batch ELBO to assess a proposed merge, as done in [4]. Given small batches (1000 examples each), there is not always enough data to warrant many distinct 25 × 25 covariance components.
Thus, this method favors merges that in fact remove vital structure. All 5 runs of this GreedyMerge
algorithm ruinously accept merges that decrease the full objective, consistently collapsing down to
just one component. Our memoized approach ensures merges are always globally beneficial.
4.2 MNIST digit clustering
We now compare algorithms for clustering N = 60000 MNIST images of handwritten digits 0-9.
We preprocess as in [9], projecting each image down to D = 50 dimensions via PCA. Here, we also
compare to Kurihara's public implementation of variational inference with split moves [9]. MO-BM
and Kurihara start at K = 1, while other methods are given 10 runs from two K = 100 initialization
routines: random and smart (based on k-means++ [17]). For online methods, we compare 20 and
100 batches, and three learning rates. All runs complete 200 passes through the full dataset.
The final ELBO values for every run of each method are shown in Fig. 3. SO's performance varies dramatically across initialization, learning rate, and number of batches. Under random initialization, SO reaches especially poor local optima (note lower y-axis scale). In contrast, our memoized approach consistently delivers solutions on par with full inference, with no apparent sensitivity to the number of batches. With births and merges enabled, MO-BM expands from K = 1 to over 80 components, finding better solutions than every smart K = 100 initialization. MO-BM even outperforms Kurihara's offline split algorithm, yielding 30-40 more components and higher ELBO values. Altogether, Fig. 3 exposes SO's extreme sensitivity, validates MO as a more reliable alternative, and
shows that our birth-merge algorithm is more effective at avoiding local optima.
Fig. 3 also shows cluster means learned by the best MO-BM run, covering many styles of each digit.
We further compute a hard segmentation of the data using the q(z) from smart initialization runs.
Each DP-GMM cluster is aligned to one digit by majority vote of its members. A plot of alignment
accuracy in Fig. 3 shows our MO-BM consistently among the best, with SO lagging significantly.
[Figure 4: SUN-397 tiny images. Left: ELBO during training. Right: Visualization of 10 of 28 learned clusters for best MO-BM run. Each column shows two images from the top 3 categories aligned to one cluster.]
[Figure 5: 8 × 8 image patches. Left: ELBO during training, N = 1.88 million. Center: Effective truncation-level K during training, N = 1.88 million. Right: ELBO during training, N = 8.64 million.]
4.3 Tiny image clustering
We next learn a full-mean DP-GMM for tiny, 32 × 32 images from the SUN-397 scene categories
dataset [18]. We preprocess all 108754 color images via PCA, projecting each example down to
D = 50 dimensions. We start MO-BM at K = 1, while other methods have fixed K = 100. Fig. 4
plots the training ELBO as more data is seen. Our MO-BM runs surpass all other algorithms.
To verify quality, Fig. 4 shows images from the 3 most-related scene categories for each of several
clusters found by MO-BM. For each learned cluster k, we rank all 397 categories to find those with
the largest fraction of members assigned to k via r??k . The result is quite sensible, with clusters for
tall free-standing objects, swimming pools and lakes, doorways, and waterfalls.
4.4 Image patch modeling
Our last experiment applies a zero-mean, full-covariance DP-GMM to learn the covariance structures of natural image patches, inspired by [19, 20]. We compare online algorithms on N = 1.88
million 8 × 8 patches, a dense subsampling of all patches from 200 images of the Berkeley Segmentation dataset. Fig. 5 shows that our birth-merge memoized algorithm started at K = 1 can
consistently add useful components and reach better solutions than alternatives. We also examined
a much bigger dataset of N = 8.64 million patches, and still see advantages for our MO-BM.
Finally, we perform denoising on 30 heldout images, using code from [19]. Our best MO-BM run on
the 1.88 million patch dataset achieves PSNR of 28.537 dB, within 0.05 dB of the PSNR achieved
by [19]'s publicly-released GMM with K = 200 trained on a similar corpus. This performance is
visually indistinguishable, highlighting the practical value of our new algorithm.
5 Conclusions
Our novel memoized online variational algorithm avoids noisiness and sensitivity inherent in
stochastic methods. Our birth and merge moves successfully escape local optima. These innovations
are applicable to common nonparametric models beyond the Dirichlet process.
Acknowledgments This research supported in part by ONR Award No. N00014-13-1-0644. M. Hughes
supported in part by an NSF Graduate Research Fellowship under Grant No. DGE0228243.
References
[1] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14:1303-1347, 2013.
[2] R. Ranganath, C. Wang, D. Blei, and E. Xing. An adaptive learning rate for stochastic variational inference. In ICML, 2013.
[3] P. Gopalan, D. M. Mimno, S. Gerrish, M. J. Freedman, and D. M. Blei. Scalable inference of overlapping communities. In NIPS, 2012.
[4] M. Bryant and E. Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In NIPS, 2012.
[5] S. Jain and R. M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13(1):158-182, 2004.
[6] D. B. Dahl. Sequentially-allocated merge-split sampler for conjugate and nonconjugate Dirichlet process mixture models. Submitted to Journal of Computational and Graphical Statistics, 2005.
[7] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixture models. Bayesian Analysis, 1(1):121-144, 2006.
[8] Y. W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. In NIPS, 2008.
[9] K. Kurihara, M. Welling, and N. Vlassis. Accelerated variational Dirichlet process mixtures. In NIPS, 2006.
[10] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, 1999.
[11] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169-186, 2008.
[12] N. Goodman, V. Mansinghka, D. M. Roy, K. Bonawitz, and J. Tenenbaum. Church: A language for generative models. In Uncertainty in Artificial Intelligence, 2008.
[13] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, 2012.
[14] N. Ueda and Z. Ghahramani. Bayesian model search for mixture models based on optimizing variational bounds. Neural Networks, 15(1):1223-1241, 2002.
[15] C. Wang and D. Blei. Truncation-free stochastic variational inference for Bayesian nonparametric models. In NIPS, 2012.
[16] N. Ueda, R. Nakano, Z. Ghahramani, and G. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109-2128, 2000.
[17] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In ACM-SIAM Symposium on Discrete Algorithms, pages 1027-1035, 2007.
[18] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[19] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, 2011.
[20] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. In NIPS, 2012.
Uncertain Markov Decision Processes
Asrar Ahmed
Singapore Management University
[email protected]
Pradeep Varakantham
Singapore Management University
[email protected]
Yossiri Adulyasak
Massachusetts Institute of Technology
[email protected]
Patrick Jaillet
Massachusetts Institute of Technology
[email protected]
Abstract
In this paper, we seek robust policies for uncertain Markov Decision Processes (MDPs). Most
robust optimization approaches for these problems have focussed on the computation of maximin
policies which maximize the value corresponding to the worst realization of the uncertainty. Recent
work has proposed minimax regret as a suitable alternative to the maximin objective for robust optimization. However, existing algorithms for handling minimax regret are restricted to models with
uncertainty over rewards only. We provide algorithms that employ sampling to improve across multiple dimensions: (a) Handle uncertainties over both transition and reward models; (b) Dependence
of model uncertainties across state, action pairs and decision epochs; (c) Scalability and quality
bounds. Finally, to demonstrate the empirical effectiveness of our sampling approaches, we provide comparisons against benchmark algorithms on two domains from literature. We also provide a
Sample Average Approximation (SAA) analysis to compute a posteriori error bounds.
Introduction
Motivated by the difficulty in exact specification of reward and transition models, researchers have
proposed the uncertain Markov Decision Process (MDP) model and robustness objectives in solving
these models. Given the uncertainty over the reward and transition models, a robust solution can
typically provide some guarantees on the worst case performance. Most of the research in computing robust solutions has assumed a maximin objective, where one computes a policy that maximizes
the value corresponding to the worst case realization [8, 4, 3, 1, 7]. This line of work has developed scalable algorithms by exploiting independence of uncertainties across states and convexity of
uncertainty sets. Recently, techniques have been proposed to deal with dependence of uncertainties [15, 6].
Regan et al. [11] and Xu et al. [16] have proposed minimax regret criterion [13] as a suitable alternative to maximin objective for uncertain MDPs. We also focus on this minimax notion of robustness
and also provide a new myopic variant of regret called Cumulative Expected Regret (CER) that
allows for development of scalable algorithms.
Due to the complexity of computing optimal minimax regret policies [16] , existing algorithms [12]
are restricted to handling uncertainty only in reward models and the uncertainties are independent
across states. Recent research has shown that sampling-based techniques [5, 9] are not only efficient
but also provide a priori (Chernoff-Hoeffiding bounds) and a posteriori [14] quality bounds for
planning under uncertainty.
In this paper, we also employ sampling-based approaches to address restrictions of existing approaches for obtaining regret-based solutions for uncertain MDPs . More specifically, we make the
following contributions: (a) An approximate Mixed Integer Linear Programming (MILP) formulation with error bounds for computing minimum regret solutions for uncertain MDPs, where the
uncertainties across states are dependent. We further provide enhancements and error bounds to
improve applicability. (b) We introduce a new myopic concept of regret, referred to as Cumulative
Expected Regret (CER) that is intuitive and that allows for development of scalable approaches.
(c) Finally, we perform a Sample Average Approximation (SAA) analysis to provide experimental
bounds for our approaches on benchmark problems from literature.
Preliminaries
We now formally define the two regret criteria that will be employed in this paper. In the definitions below, we assume an underlying MDP, $M = \langle S, A, T, R, H \rangle$, where a policy is represented as $\vec\pi^t = \{\pi^t, \pi^{t+1}, \ldots, \pi^{H-1}\}$, the optimal policy as $\vec\pi^*$, and the optimal expected value as $v^0(\vec\pi^*)$. The maximum reward in any state s is denoted as $R^*(s) = \max_a R(s, a)$. Throughout the paper, we use $\alpha(s)$ to denote the probability of starting in state s and $\gamma$ to represent the discount factor.
Definition 1 Regret for any policy π⃗^0 is denoted by reg(π⃗^0) and is defined as:

reg(π⃗^0) = v^0(π⃗*) − v^0(π⃗^0), where v^0(π⃗^0) = Σ_s α(s) · v^0(s, π⃗^0),

v^t(s, π⃗^t) = Σ_a π^t(s, a) · [ R(s, a) + γ Σ_{s'} T(s, a, s') · v^{t+1}(s', π⃗^{t+1}) ]
Extending the definitions of simple and cumulative regret in stochastic multi-armed bandit problems [2], we now define a new variant of regret called Cumulative Expected Regret (CER).
Definition 2 CER for policy π⃗^0 is denoted by creg(π⃗^0) and is defined as:

creg(π⃗^0) = Σ_s α(s) · creg^0(s, π⃗^0), where

creg^t(s, π⃗^t) = Σ_a π^t(s, a) · [ R*(s) − R(s, a) + γ Σ_{s'} T(s, a, s') · creg^{t+1}(s', π⃗^{t+1}) ]   (1)
The following properties highlight the dependencies between regret and CER.
Proposition 1 For a policy π⃗^0: 0 ≤ reg(π⃗^0) ≤ creg(π⃗^0) ≤ [max_s R*(s) − min_s R*(s)] · (1 − γ^H)/(1 − γ)

Proof Sketch¹ By rewriting Equation (1) as creg(π⃗^0) = v^{0,#}(π⃗^0) − v^0(π⃗^0), we provide the proof.

Corollary 1 If ∀s, s' ∈ S: R*(s) = R*(s'), then ∀π⃗^0: creg(π⃗^0) = reg(π⃗^0).

Proof. Substituting max_s R*(s) = min_s R*(s) in the result of Proposition 1, we have creg(π⃗^0) = reg(π⃗^0).
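To make Definitions 1 and 2 concrete, here is a minimal Python sketch, not from the paper: the MDP instance, the policy, and the helper names (expected_value, cer, optimal_value) are all made up for illustration. It evaluates reg and creg by backward induction and prints the ordering claimed in Proposition 1.

import numpy as np

rng = np.random.default_rng(0)
S, A, H, gamma = 4, 3, 6, 0.9
T = rng.dirichlet(np.ones(S), size=(S, A))   # T[s, a, s'] (illustrative model)
R = rng.uniform(0.0, 1.0, size=(S, A))       # R[s, a]
alpha = np.full(S, 1.0 / S)                  # starting state distribution
R_star = R.max(axis=1)                       # R*(s) = max_a R(s, a)

def expected_value(policy):
    # policy[t][s, a]: probability of taking a in s at step t
    v = np.zeros(S)
    for t in reversed(range(H)):
        v = (policy[t] * (R + gamma * T @ v)).sum(axis=1)
    return alpha @ v

def cer(policy):
    # backward induction on Equation (1)
    c = np.zeros(S)
    for t in reversed(range(H)):
        c = (policy[t] * (R_star[:, None] - R + gamma * T @ c)).sum(axis=1)
    return alpha @ c

def optimal_value():
    v = np.zeros(S)
    for _ in range(H):
        v = (R + gamma * T @ v).max(axis=1)
    return alpha @ v

pi = [rng.dirichlet(np.ones(A), size=S) for _ in range(H)]
reg = optimal_value() - expected_value(pi)
bound = (R_star.max() - R_star.min()) * (1 - gamma**H) / (1 - gamma)
print(f"0 <= reg={reg:.4f} <= creg={cer(pi):.4f} <= bound={bound:.4f}")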
Uncertain MDP
A finite horizon uncertain MDP is defined as the tuple ⟨S, A, 𝒯, ℛ, H⟩. S denotes the set of
states and A denotes the set of actions. 𝒯 = Δ_τ(T) denotes a distribution over the set of transition
functions T, where T_k^t(s, a, s') denotes the probability of transitioning from state s ∈ S to state s' ∈
S on taking action a ∈ A at time step t according to the kth element in T. Similarly, ℛ = Δ_ρ(R)
denotes the distribution over the set of reward functions R, where R_k^t(s, a, s') is the reinforcement
obtained on taking action a in state s and transitioning to state s' at time t according to the kth element
in R. Both T and R sets can have infinite elements. Finally, H is the time horizon.
In the above representation, every element of T and R represents uncertainty over the entire horizon
and hence this representation captures dependence in uncertainty distributions across states. We now
provide a formal definition for the independence of uncertainty distributions that is equivalent to the
rectangularity property introduced in Iyengar et al. [4].

¹ Detailed proof provided in supplement under Proposition 1.
Definition 3 An uncertainty distribution Δ_τ over the set of transition functions T is independent
over state-action pairs at various decision epochs if

Δ_τ(T) = ×_{s∈S, a∈A, t≤H} Δ_{s,a}^{τ,t}(T_{s,a}^t), i.e. ∀k, Pr_{Δ_τ}(T^k) = ∏_{s,a,t} Pr_{Δ_{s,a}^{τ,t}}(T_{s,a}^{t,k}),

where T = ×_{s,a,t} T_{s,a}^t, T_{s,a}^t is the set of transition functions for s, a, t; Δ_{s,a}^{τ,t} is the distribution over
the set T_{s,a}^t, and Pr_{Δ_τ}(T^k) is the probability of the transition function T^k given the distribution Δ_τ.
We can provide a similar definition for the independence of uncertainty distributions over the reward
functions. In the following definitions, we include the transition model T and reward model R as
subscripts to indicate the value (v), regret (reg) and CER (creg) functions corresponding to a specific
MDP. Existing works on the computation of maximin policies have the following objective:
π^{maximin} = arg max_{π⃗^0} min_{T∈T, R∈R} Σ_s α(s) · v_{T,R}^0(s, π⃗^0)
Our goal is to compute policies that minimize the maximum regret or cumulative regret over possible
models of transition and reward uncertainty:
π^{reg} = arg min_{π⃗^0} max_{T∈T, R∈R} reg_{T,R}(π⃗^0);   π^{creg} = arg min_{π⃗^0} max_{T∈T, R∈R} creg_{T,R}(π⃗^0)
Regret Minimizing Solution
We will first consider the more general case of dependent uncertainty distributions. Our approach
to obtaining a regret minimizing solution relies on sampling the uncertainty distributions over the
transition and reward models. We formulate the regret minimization problem over the sample set as
an optimization problem and then approximate it as a Mixed Integer Linear Program (MILP).
We now describe the representation of a sample and the definition of optimal expected value for
a sample, a key component in the computation of regret. Since there are dependencies amongst
uncertainties, we can only sample from Δ_τ, Δ_ρ and not from Δ_{s,a}^{τ,t}, Δ_{s,a}^{ρ,t}. Thus, a sample is:

ξ_q = { (T_q^0, R_q^0), (T_q^1, R_q^1), ..., (T_q^{H−1}, R_q^{H−1}) },

where T_q^t and R_q^t refer to the transition and reward model respectively at time step t in sample q.
Let π⃗^t represent the policy for each time step from t to H − 1 and the set of samples be denoted
by ξ. Intuitively, that corresponds to |ξ| discrete MDPs, and our goal is to compute one
policy that minimizes the regret over all the |ξ| MDPs, i.e.

π^{reg} = arg min_{π⃗^0} max_{ξ_q∈ξ} Σ_s α(s) · [ v*_{ξ_q}(s) − v_{ξ_q}^0(s, π⃗^0) ],

where v*_{ξ_q} and v_{ξ_q}^0(s, π⃗^0) denote the optimal expected value and the expected value for policy π⃗^0,
respectively, of the sample ξ_q.
Let π⃗^0 be any policy corresponding to the sample ξ_q; then the expected value is defined as follows:
v_{ξ_q}^t(s, π⃗^t) = Σ_a π^t(s, a) · v_{ξ_q}^t(s, a, π⃗^t), where v_{ξ_q}^t(s, a, π⃗^t) = R_q^t(s, a) + γ Σ_{s'} v_{ξ_q}^{t+1}(s', π⃗^{t+1}) · T_q^t(s, a, s')
The optimization problem for computing the regret minimizing policy corresponding to sample set
ξ is then defined as follows:

min_{π⃗^0} reg(π⃗^0)

s.t. reg(π⃗^0) ≥ v*_{ξ_q} − Σ_s α(s) · v_{ξ_q}^0(s, π⃗^0)   ∀ξ_q   (2)

v_{ξ_q}^t(s, π⃗^t) = Σ_a π^t(s, a) · v_{ξ_q}^t(s, a, π⃗^t)   ∀s, ξ_q, t   (3)

v_{ξ_q}^t(s, a, π⃗^t) = R_q^t(s, a) + γ Σ_{s'} v_{ξ_q}^{t+1}(s', π⃗^{t+1}) · T_q^t(s, a, s')   ∀s, a, ξ_q, t   (4)

The value function expression in Equation (3) is a product of two variables, π^t(s, a) and
v_{ξ_q}^t(s, a, π⃗^t), which hampers scalability significantly. We now linearize these nonlinear terms.
Mixed Integer Linear Program
The optimal policy for minimizing maximum regret in the general case is randomized. However, to
account for domains which only allow for deterministic policies, we provide linearization separately
for the two cases of deterministic and randomized policies.
Deterministic Policy: In case of deterministic policies, we replace Equation (3) with the following
equivalent integer linear constraints:

v_{ξ_q}^t(s, π⃗^t) ≤ v_{ξ_q}^t(s, a, π⃗^t) + (1 − π^t(s, a)) · M
v_{ξ_q}^t(s, π⃗^t) ≥ v_{ξ_q}^t(s, a, π⃗^t) − (1 − π^t(s, a)) · M   ∀s, a, ξ_q, t   (5)

M is a large positive constant that is an upper bound on v_{ξ_q}^t(s, a, π⃗^t). Equivalence to the product
terms in Equation (3) can be verified by considering all values of π^t(s, a).
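As a quick sanity check on this linearization, the following sketch (assuming the two-sided big-M form stated above, with made-up stand-in values for v(s, a)) enumerates the deterministic choices of the policy and confirms that the constraints pin v(s) to the value of the selected action:

import numpy as np

rng = np.random.default_rng(1)
A = 3
q = rng.uniform(0.0, 5.0, size=A)   # stand-in for v(s, a) over actions (illustrative)
M = q.max() + 1.0                   # any valid upper bound on v(s, a)

for a_star in range(A):
    pi = np.eye(A, dtype=int)[a_star]            # deterministic policy: pick a_star
    lo = max(q[a] - (1 - pi[a]) * M for a in range(A))
    hi = min(q[a] + (1 - pi[a]) * M for a in range(A))
    # the feasible interval for v(s) collapses to q[a_star]
    assert abs(lo - q[a_star]) < 1e-12 and abs(hi - q[a_star]) < 1e-12
    print(f"a*={a_star}: v(s) forced to {lo:.3f} (= v(s, a*))")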
Randomized Policy: When π⃗^0 is a randomized policy, we have a product of two continuous
variables. We provide a mixed integer linear approximation to address the product terms above. Let

A_{ξ_q}^t(s, a, π⃗^t) = [ v_{ξ_q}^t(s, a, π⃗^t) + π^t(s, a) ] / 2 ;   B_{ξ_q}^t(s, a, π⃗^t) = [ v_{ξ_q}^t(s, a, π⃗^t) − π^t(s, a) ] / 2

Equation (3) can then be rewritten as:

v_{ξ_q}^t(s, π⃗^t) = Σ_a [ A_{ξ_q}^t(s, a, π⃗^t)² − B_{ξ_q}^t(s, a, π⃗^t)² ]   (6)
As discussed in the next subsection on "Pruning dominated actions", we can compute upper and
lower bounds for v_{ξ_q}^t(s, a, π⃗^t) and hence for A_{ξ_q}^t(s, a, π⃗^t) and B_{ξ_q}^t(s, a, π⃗^t). We approximate the
squared terms by using piecewise linear components that provide an upper bound on the squared
terms. We employ a standard method from the literature of dividing the variable range into multiple
break points. More specifically, we divide the overall range of A_{ξ_q}^t(s, a, π⃗^t) (or B_{ξ_q}^t(s, a, π⃗^t)), say
[br_0, br_r], into r intervals by using r + 1 points, namely ⟨br_0, br_1, ..., br_r⟩. We associate a linear
variable λ_{ξ_q}^t(s, a, w) with each break point w and then approximate A_{ξ_q}^t(s, a, π⃗^t)² (and B_{ξ_q}^t(s, a, π⃗^t)²)
as follows:

A_{ξ_q}^t(s, a, π⃗^t) = Σ_w λ_{ξ_q}^t(s, a, w) · br_w   ∀s, a, ξ_q, t   (7)

A_{ξ_q}^t(s, a, π⃗^t)² = Σ_w λ_{ξ_q}^t(s, a, w) · (br_w)²   ∀s, a, ξ_q, t   (8)

Σ_w λ_{ξ_q}^t(s, a, w) = 1   ∀s, a, ξ_q, t   (9)

SOS2_{ξ_q}^{s,a,t}({λ_{ξ_q}^t(s, a, w)}_{w≤r})   ∀s, a, ξ_q, t

where SOS2 is a construct which is associated with a set of variables of which at most two variables
can be non-zero, and if two variables are non-zero they must be adjacent. Since any number in the
range lies between at most two adjacent points, we have the above constructs for the λ_{ξ_q}^t(s, a, w)
variables. We implement the above adjacency constraints on λ_{ξ_q}^t(s, a, w) using the CPLEX Special
Ordered Sets (SOS) type 2².
Proposition 2 Let [c, d] denote the range of values for A_{ξ_q}^t(s, a, π⃗^t) and assume we have r + 1
points that divide A_{ξ_q}^t(s, a, π⃗^t)² into r equal intervals of size ε = (d² − c²)/r; then the approximation
error δ < ε/4.

Proof: Let the r + 1 points be br_0, ..., br_r. By definition, we have (br_w)² = (br_{w−1})² + ε. Because
of the convexity of the x² function, the maximum approximation error in any interval [br_{w−1}, br_w]
occurs at its mid-point³. Hence, the approximation error δ is given by:

δ ≤ [ (br_w)² + (br_{w−1})² ] / 2 − [ (br_w + br_{w−1}) / 2 ]² = (br_w − br_{w−1})² / 4 < ε/4

² Using CPLEX SOS-2 considerably improves runtime compared to a binary variables formulation.
³ Proposition and proof provided in supplement as footnote 3.
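The bound of Proposition 2 is easy to verify numerically. A minimal sketch, with an illustrative range and interval count of our own choosing: it builds breakpoints whose squares are spaced by ε, measures the worst gap between x² and its piecewise-linear over-approximation, and compares it to ε/4.

import numpy as np

c, d, r = 0.5, 3.0, 8                        # illustrative range and interval count
eps = (d**2 - c**2) / r
br = np.sqrt(c**2 + eps * np.arange(r + 1))  # breakpoints with (br_w)^2 spaced by eps

worst = 0.0
for w in range(r):
    x = np.linspace(br[w], br[w + 1], 1001)
    lam = (x - br[w]) / (br[w + 1] - br[w])  # SOS2 weights on the two active points
    approx = (1 - lam) * br[w]**2 + lam * br[w + 1]**2
    worst = max(worst, float(np.max(approx - x**2)))

print(f"worst-case error {worst:.5f} < eps/4 = {eps / 4:.5f}")
assert worst < eps / 4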
Proposition 3 Let ṽ_{ξ_q}^t(s, π⃗^t) denote the approximation of v_{ξ_q}^t(s, π⃗^t). Then

v_{ξ_q}^t(s, π⃗^t) − |A| · ε · (1 − γ^{H−1}) / [4 · (1 − γ)] ≤ ṽ_{ξ_q}^t(s, π⃗^t) ≤ v_{ξ_q}^t(s, π⃗^t) + |A| · ε · (1 − γ^{H−1}) / [4 · (1 − γ)]

Proof Sketch⁴: We use the approximation error provided in Proposition 2 and propagate it through
the value function update.

Corollary 2 The positive and negative errors in regret are bounded by |A| · ε · (1 − γ^{H−1}) / [4 · (1 − γ)].

Proof. From Equation (2) and Proposition 3, we have the proof.
Since the break points are fixed beforehand, we can find tighter bounds (refer to the proof of
Proposition 2). Also, we can further improve the performance (both run-time and solution quality) of
the MILP by pruning out dominated actions and adopting clever sampling strategies, as discussed in
the next subsections.
Pruning dominated actions
We now introduce a pruning approach⁵ to remove actions that will never be assigned a positive
probability in a regret minimization strategy. For every state-action pair at each time step, we define
a minimum and maximum value function as follows:

v_{ξ_q}^{t,min}(s, a) = R_q^t(s, a) + γ Σ_{s'} T_q^t(s, a, s') · v_{ξ_q}^{t+1,min}(s') ;   v_{ξ_q}^{t,min}(s) = min_a v_{ξ_q}^{t,min}(s, a)

v_{ξ_q}^{t,max}(s, a) = R_q^t(s, a) + γ Σ_{s'} T_q^t(s, a, s') · v_{ξ_q}^{t+1,max}(s') ;   v_{ξ_q}^{t,max}(s) = max_a v_{ξ_q}^{t,max}(s, a)

An action a' is pruned if there exists the same action a over all samples ξ_q, such that

v_{ξ_q}^{t,min}(s, a) ≥ v_{ξ_q}^{t,max}(s, a')   ∀ξ_q
The above pruning step follows from the observation that an action a' whose best case payoff is less
than the worst case payoff of another action a cannot be part of the regret optimal strategy, since we
could switch from a' to a without increasing the regret value. It should be noted that an action that
is not optimal for any of the samples cannot be pruned.
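A minimal sketch of this pruning test, assuming a small made-up sample set: it computes the per-sample min/max action values by backward induction and flags actions dominated in every sample. All names and sizes here are our own.

import numpy as np

rng = np.random.default_rng(2)
S, A, H, gamma, Q = 4, 3, 5, 0.9, 3
samples = [(rng.dirichlet(np.ones(S), size=(S, A)),   # T_q[s, a, s']
            rng.uniform(0.0, 1.0, size=(S, A)))       # R_q[s, a]
           for _ in range(Q)]

def min_max_q(Tq, Rq):
    # backward induction on per-sample pessimistic/optimistic action values
    vmin, vmax = np.zeros(S), np.zeros(S)
    qmin, qmax = [None] * H, [None] * H
    for t in reversed(range(H)):
        qmin[t] = Rq + gamma * Tq @ vmin
        qmax[t] = Rq + gamma * Tq @ vmax
        vmin, vmax = qmin[t].min(axis=1), qmax[t].max(axis=1)
    return qmin, qmax

bounds = [min_max_q(Tq, Rq) for Tq, Rq in samples]
# prune a' at (t, s) if some a has v_q^{t,min}(s,a) >= v_q^{t,max}(s,a') in all samples
for t in range(H):
    for s in range(S):
        for ap in range(A):
            if any(all(qmin[t][s, a] >= qmax[t][s, ap] for qmin, qmax in bounds)
                   for a in range(A) if a != ap):
                print(f"pruned: t={t}, s={s}, a'={ap}")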
Greedy sampling
The scalability of the MILP formulation above is constrained by the number of samples Q. So,
instead of generating only the fixed set of Q samples from the uncertainty distribution over models,
we generate more than Q samples and then pick a set of size Q so that samples are "as far apart"
as possible. The key intuition in selecting the samples is to consider distance among samples as
being equivalent to entropy in the optimal policies for the MDPs in the samples. For each decision
epoch t, each state s and action a, we define P̂r^{s,a,t}(π̂^{*,t}(s, a) = 1) to be the probability that a is
the optimal action in state s at time t. Similarly, we define P̂r^{s,a,t}(π̂^{*,t}(s, a) = 0):

P̂r^{s,a,t}(π̂^{*,t}(s, a) = 1) = Σ_{ξ_q} π̂_{ξ_q}^{*,t}(s, a) / Q ;   P̂r^{s,a,t}(π̂^{*,t}(s, a) = 0) = Σ_{ξ_q} (1 − π̂_{ξ_q}^{*,t}(s, a)) / Q
Let the total entropy of sample set ξ (|ξ| = Q) be represented as ΔS(ξ); then

ΔS(ξ) = − Σ_{t,s,a} Σ_{z∈{0,1}} P̂r^{s,a,t}(π̂^{*,t}(s, a) = z) · ln P̂r^{s,a,t}(π̂^{*,t}(s, a) = z)

We use a greedy strategy to select the Q samples, i.e. we iteratively add samples that maximize the
entropy of the sample set in that iteration.
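A sketch of the greedy selection, under the assumption that each candidate sample's optimal policy has already been computed (here faked with random one-hot indicators; pool and Q are illustrative):

import numpy as np

rng = np.random.default_rng(3)
S, A, H = 4, 3, 5
pool, Q = 30, 10
# opt[q, t, s, :]: one-hot optimal action for candidate sample q; made up here, in
# practice it comes from solving each sampled MDP
opt = np.eye(A)[rng.integers(A, size=(pool, H, S))]

def entropy(idx):
    p = opt[idx].mean(axis=0)            # estimate of Pr(a optimal) per (t, s, a)
    ps = np.stack([p, 1.0 - p])          # z = 1 and z = 0
    safe = np.where(ps > 0, ps, 1.0)     # avoid log(0); 0*log(0) treated as 0
    return float(-(ps * np.log(safe)).sum())

chosen = [0]
while len(chosen) < Q:
    rest = [q for q in range(pool) if q not in chosen]
    chosen.append(max(rest, key=lambda q: entropy(chosen + [q])))
print("selected samples:", chosen)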
It is possible to provide bounds on the number of samples required for a given error using the
methods suggested by Shapiro et al. [14]. However, these bounds are conservative and, as we show
in the experimental results section, typically we only require a small number of samples.

⁴ Detailed proof in supplement under Proposition 3.
⁵ Pseudo code provided in the supplement under the "Pruning dominated actions" section.
CER Minimizing Solution
The MILP based approach mentioned in the previous section can easily be adapted to minimize the
maximum cumulative regret over all samples when uncertainties across states are dependent:

min_{π⃗^0} creg(π⃗^0)

s.t. creg(π⃗^0) ≥ Σ_s α(s) · creg_{ξ_q}^0(s, π⃗^0)   ∀ξ_q

creg_{ξ_q}^t(s, π⃗^t) = Σ_a π^t(s, a) · creg_{ξ_q}^t(s, a, π⃗^t)   ∀s, t, ξ_q   (10)

creg_{ξ_q}^t(s, a, π⃗^t) = R_q^{*,t}(s) − R_q^t(s, a) + γ Σ_{s'} T_q^t(s, a, s') · creg_{ξ_q}^{t+1}(s', π⃗^{t+1})   ∀s, a, t, ξ_q   (11)

where the product term π^t(s, a) · creg_{ξ_q}^t(s, a, π⃗^t) is approximated as described earlier.
While we were unable to exploit the independence of uncertainty distributions across states with
minimax regret, we are able to exploit the independence with minimax CER. In fact, a key advantage
of the CER robustness concept in the context of independent uncertainties is that it has the optimal
substructure over time steps, and hence a Dynamic Programming (DP) algorithm can be used to solve
it.
In the case of independent uncertainties, samples at each time step can be drawn independently, and
we now introduce a formal notation to account for samples drawn at each time step. Let ξ^t denote
the set of samples at time step t; then ξ = ×_{t≤H−1} ξ^t. Further, we use ξ⃗^t to indicate the cross product
of samples from t to H − 1, i.e. ξ⃗^t = ×_{t≤e≤H−1} ξ^e. Thus, ξ⃗^0 = ξ. To indicate the entire horizon
samples corresponding to a sample p from time step t, we have ξ⃗_p^t = ξ_p^t × ξ⃗^{t+1}.
For notational compactness, we use ΔR_p^{t−1}(s, a) = R_p^{*,t−1}(s) − R_p^{t−1}(s, a). Because of independence
in uncertainties across time steps, for a sample set ξ⃗_p^{t−1} = ξ_p^{t−1} × ξ⃗^t, we have the following:
max_{ξ⃗_p^{t−1} ∈ ξ⃗^{t−1}} creg_{ξ⃗_p^{t−1}}(s, π⃗^{t−1}) = max_{ξ_p^{t−1}, ξ⃗_q^t} Σ_a π^{t−1}(s, a) · [ ΔR_p^{t−1}(s, a) + γ Σ_{s'} T_p^{t−1}(s, a, s') · creg_{ξ⃗_q^t}(s', π⃗^t) ]

= max_{ξ_p^{t−1}} Σ_a π^{t−1}(s, a) · [ ΔR_p^{t−1}(s, a) + γ Σ_{s'} T_p^{t−1}(s, a, s') · max_{ξ⃗_q^t ∈ ξ⃗^t} creg_{ξ⃗_q^t}(s', π⃗^t) ]   (12)
Proposition 4 At time step t − 1, the CER corresponding to any policy π^{t−1} will have least regret
if it includes the CER minimizing policy from t. Formally, if π⃗^{*,t} represents the CER minimizing
policy from t and π⃗^t represents any arbitrary policy, then:

∀s: max_{ξ⃗_p^{t−1} ∈ ξ⃗^{t−1}} creg_{ξ⃗_p^{t−1}}(s, ⟨π^{t−1}, π⃗^{*,t}⟩) ≤ max_{ξ⃗_p^{t−1} ∈ ξ⃗^{t−1}} creg_{ξ⃗_p^{t−1}}(s, ⟨π^{t−1}, π⃗^t⟩)   (13)

if, ∀s: max_{ξ⃗_q^t ∈ ξ⃗^t} creg_{ξ⃗_q^t}(s, π⃗^{*,t}) ≤ max_{ξ⃗_q^t ∈ ξ⃗^t} creg_{ξ⃗_q^t}(s, π⃗^t)   (14)

Proof Sketch⁶ We prove this by using Equations (14) and (12) in the LHS of Equation (13).
It is easy to show that minimizing CER also has an optimal substructure:

min_{π⃗^0} max_{ξ⃗_p^0} Σ_s α(s) · creg_{ξ⃗_p^0}(s, π⃗^0) ⟹ min_{π⃗^0} Σ_s α(s) · max_{ξ⃗_p^0} creg_{ξ⃗_p^0}(s, π⃗^0)   (15)

In Proposition 4 (extending the reasoning to t = 1), we have already shown that max_{ξ⃗_p^0} creg_{ξ⃗_p^0}(s, π⃗^0)
has an optimal substructure. Thus, Equation (15) can also exploit the optimal substructure.
The MINIMIZECER function below provides the pseudo code for a DP algorithm that exploits this
structure. At each stage t we calculate the creg for each state-action pair corresponding to each of the
samples at that stage, i.e. ξ^t (lines 6-9). Once these are computed, we obtain the maximum creg and
the policy corresponding to it (line 10) using the GETCER() function. In the next iteration, the creg
computed at t is then used in the computation of creg at t − 1 using the same update step (lines 6-9).

⁶ Detailed proof in supplement under Proposition 4.
MINIMIZECER()
1: for all t ≤ H − 1 do
2:   ξ^t ← GENSAMPLES(T, R)
3: for all s ∈ S do
4:   creg^H(s) ← 0
5: while t ≥ 0 do
6:   for all s ∈ S do
7:     for all ξ_q^t ∈ ξ^t, a ∈ A do
8:       creg_{ξ_q^t}^t(s, a) ← ΔR_q^t(s, a) +
9:         γ Σ_{s'} T_q^t(s, a, s') · creg^{t+1}(s')
10:    π^t, creg^t(s) ← GETCER(s, {creg_{ξ_q^t}^t(s, a)})
11:  t ← t − 1
return (creg⃗^0, π⃗^0)

GETCER(s, {creg_{ξ_q^t}^t(s, a)}):
min creg^t(s)
s.t. creg^t(s) ≥ Σ_a π^t(s, a) · creg_{ξ_q^t}^t(s, a)   ∀ξ_q^t
Σ_a π^t(s, a) = 1
0 ≤ π^t(s, a) ≤ 1   ∀a
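A compact Python rendering of MINIMIZECER/GETCER is sketched below, assuming independent per-epoch samples. The sample generator and all sizes are made up, and scipy's linprog stands in for the paper's LP solver.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
S, A, H, gamma, K = 4, 3, 5, 0.9, 3   # K samples per decision epoch (illustrative)

def gen_samples():
    # stand-in for GENSAMPLES: draw K independent (T, R) models for one epoch
    T = rng.dirichlet(np.ones(S), size=(K, S, A))   # T[q, s, a, s']
    R = rng.uniform(0.0, 1.0, size=(K, S, A))       # R[q, s, a]
    return T, R

def get_cer(creg_sa):
    # GETCER as an LP over variables (c, pi): minimize c
    # subject to sum_a pi(a) * creg_sa[q, a] <= c for every sample q
    obj = np.array([1.0] + [0.0] * A)
    A_ub = np.hstack([-np.ones((K, 1)), creg_sa])
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(K),
                  A_eq=[[0.0] + [1.0] * A], b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, 1.0)] * A)
    return res.x[1:], res.fun

creg_next = np.zeros(S)
policy = [None] * H
for t in reversed(range(H)):
    T, R = gen_samples()
    dR = R.max(axis=2, keepdims=True) - R       # R_q^{*,t}(s) - R_q^t(s, a)
    creg_sa = dR + gamma * T @ creg_next        # creg[q, s, a]
    pi_t, creg_t = np.zeros((S, A)), np.zeros(S)
    for s in range(S):
        pi_t[s], creg_t[s] = get_cer(creg_sa[:, s, :])
    policy[t], creg_next = pi_t, creg_t
print("creg^0 per state:", np.round(creg_next, 3))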
It can be noted that MINIMIZECER() makes only H · |S| calls to the LP in the GETCER() function,
each of which has only |A| continuous variables and at most 1 + max_t |ξ^t| constraints.
Thus, the overall complexity of MINIMIZECER() is polynomial in the number of samples given fixed
values of other attributes.
Let creg^{*,H−1}(s, a) denote the optimal cumulative regret at time step H − 1 for taking action a in
state s and creg_ξ^{*,H−1}(s, a) denote the optimal cumulative regret over the sample set ξ. Let the
indicator random variable X be defined as follows:

X = 1 if creg^{*,H−1}(s, a) − creg_ξ^{*,H−1}(s, a) ≤ ρ, and 0 otherwise.

By using Chernoff and Hoeffding bounds on X, it is possible to provide bounds on the deviation from
the mean and on the number of samples at H − 1. This can then be propagated to H − 2 and so on.
However, these bounds can be very loose and they do not exploit the properties of creg functions.
Bounds developed on spacings of order statistics can help exploit the properties of creg functions.
We will leave this for future work.
Experimental Results
In this section, we provide a performance comparison of the various algorithms introduced in previous
sections over two domains. MILP-Regret refers to the randomized policy variant of the MILP
approximation algorithm for solving uncertain MDPs with dependent uncertainties. The analogous
algorithm for minimizing CER is referred to as MILP-CER. We refer to the dynamic programming
algorithm for minimizing CER in the independent uncertainty case as DP-CER and, finally, we refer
to the maximin value algorithm as "Maximin". All the algorithms finished within 15 minutes on all
the problems. DP-CER was much faster than the other algorithms and finished within a minute on the
largest problems.
We provide the following results in this section:
(1) Performance comparison of Greedy sampling and Random sampling strategies in the context of
MILP-Regret as we increase the number of samples.
(2) SAA analysis of the results obtained using MILP-Regret.
(3) Comparison of MILP-Regret and MILP-CER policies with respect to simulated regret.
(4) Comparison of DP-CER and Maximin.
The first three comparisons correspond to the dependent uncertainties case and the results are based
on a path planning problem that is motivated by disaster rescue and is a modification of the one
employed in Bagnell et al. [1]. On top of normal transitional uncertainty, we have uncertainty over
transition and reward models due to random obstacles and random reward cells. Furthermore, these
uncertainties are dependent on each other due to patterns in terrains. Each sample of the various
uncertainties represents an individual map and can be modelled as an MDP. We experimented with
[Figure 1 appears here with three panels: (a) Sampling Strategies (MILP-Regret), plotting the
difference between test-set and learning-set regret against the number of samples (5-25) for Random
and Greedy sampling; (b) SAA Analysis, plotting the gap against the number of samples for
uncertainty settings 1, 2 and 3; (c) DP-CER Analysis, plotting performance against the
cost-to-revenue ratio (0.1-1) for DP-CER and Maximin with demand means 0.3, 0.5 and 0.7.]
Figure 1: In (a), (b) we have a 4 × 4 grid, H = 5. In (c), the maximum inventory size (X) = 50,
H = 20, |ξ^t| = 50. The normal distribution mean μ = {0.3, 0.4, 0.5} · X and σ ≤ min{μ, X − μ}/3.
a grid world of size 4 × 4 while varying the numbers of obstacles and reward cells, the horizon, and
the number of break points employed (3-6).
In Figure 1a, we show the effect of using the greedy sampling strategy on the MILP-Regret policy. On
the X-axis, we represent the number of samples used for computation of the policy (learning set). The
test set from which the samples were selected consisted of 250 samples. We then obtained the policies
using MILP-Regret corresponding to the sample sets (referred to as learning sets) generated by using
the two sampling strategies. On the Y-axis, we show the percentage difference between simulated regret
values on the test and learning sample sets. We observe that for a fixed difference, the number of samples
required by greedy is significantly lower in comparison to random. Furthermore, the variance in the
difference is also much lower for greedy. A key result from this graph is that even with just 15
samples, the difference with actual regret is less than 10%.
Figure 1b shows that even the gap obtained using SAA analysis⁷ is near zero (< 0.1) with 15
samples. We have shown the gap and the variance of the gap over three different settings of uncertainty
labeled 1, 2 and 3. Setting 3 has the highest uncertainty over the models and Setting 1 has the least
uncertainty. The variance of the gap was higher for higher uncertainty settings.
While MILP-CER obtained a simulated regret value (over 250 samples) within the bound provided
in Proposition 1, we were unable to find any correlation in the simulated regret values of MILP-Regret
and MILP-CER policies as the number of samples was increased. We have not yet ascertained a reason
for there being no correlation in performance.
In the last result, shown in Figure 1c, we employ the well known single product finite horizon
stochastic inventory control problem [10]. We compare DP-CER against the widely used benchmark
algorithm on this domain, Maximin. The demand values at each decision epoch were taken from a
normal distribution. We considered three different settings of mean and variance of the demand.
As expected, the DP-CER approach provides much higher values than maximin, and the difference
between the two reduced as the cost-to-revenue ratio increased. We obtained similar results when
the demands were taken from other distributions (uniform and bi-modal).
Conclusions
We have introduced scalable sampling-based mechanisms for optimizing regret and a new variant of
regret called CER in uncertain MDPs with dependent and independent uncertainties across states.
We have provided a variety of theoretical results that indicate the connection between regret and
CER, quality bounds on regret in case of MILP-Regret, optimal substructure in optimizing CER for
the independent uncertainty case, and run-time performance for MINIMIZECER. In the future, we hope
to better understand the correlation between regret and CER, while also understanding the properties
of CER policies.
Acknowledgement This research was supported in part by the National Research Foundation
Singapore through the Singapore MIT Alliance for Research and Technology's Future Urban Mobility
research programme. The last author was also supported by ONR grant N00014-12-1-0999.
⁷ We have provided the method for performing SAA analysis in the supplement.
References
[1] J. Andrew Bagnell, Andrew Y. Ng, and Jeff G. Schneider. Solving uncertain Markov decision
processes. Technical report, Carnegie Mellon University, 2001.
[2] Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure exploration in multi-armed bandits
problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory,
2009.
[3] Robert Givan, Sonia Leach, and Thomas Dean. Bounded-parameter Markov decision processes.
Artificial Intelligence, 122, 2000.
[4] G. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30, 2004.
[5] Michael Kearns, Yishay Mansour, and Andrew Y. Ng. A sparse sampling algorithm for
near-optimal planning in large Markov decision processes. Machine Learning, 49, 2002.
[6] Shie Mannor, Ofir Mebel, and Huan Xu. Lightning does not strike twice: Robust MDPs with
coupled uncertainty. In International Conference on Machine Learning (ICML), 2012.
[7] Andrew Mastin and Patrick Jaillet. Loss bounds for uncertain transition probabilities in Markov
decision processes. In IEEE Annual Conference on Decision and Control (CDC), 2012.
[8] Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with
uncertain transition matrices. Operations Research, 2005.
[9] Joelle Pineau, Geoffrey J. Gordon, and Sebastian Thrun. Point-based value iteration: An
anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence,
2003.
[10] Martin Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming.
John Wiley and Sons, 1994.
[11] Kevin Regan and Craig Boutilier. Regret-based reward elicitation for Markov decision
processes. In Uncertainty in Artificial Intelligence, 2009.
[12] Kevin Regan and Craig Boutilier. Robust policy computation in reward-uncertain MDPs using
nondominated policies. In National Conference on Artificial Intelligence (AAAI), 2010.
[13] Leonard Savage. The Foundations of Statistics. Wiley, 1954.
[14] A. Shapiro. Monte Carlo sampling methods. In Stochastic Programming, volume 10 of
Handbooks in Operations Research and Management Science. Elsevier, 2003.
[15] Wolfram Wiesemann, Daniel Kuhn, and Berç Rustem. Robust Markov decision processes.
Mathematics of Operations Research, 38(1), 2013.
[16] Huan Xu and Shie Mannor. Parametric regret in uncertain Markov decision processes. In IEEE
Conference on Decision and Control (CDC), 2009.
| 4970 |@word h:2 polynomial:1 d2:1 seek:1 propagate:1 p0:2 pick:1 selecting:1 daniel:1 existing:4 savage:1 yet:1 must:1 john:1 remove:1 update:2 greedy:7 selected:1 intelligence:4 wolfram:1 provides:2 mannor:2 c2:1 prove:1 introduce:3 expected:9 planning:3 multi:2 actual:1 armed:2 considering:1 increasing:1 provided:7 underlying:1 bounded:2 maximizes:1 notation:1 minimizes:1 maxa:2 developed:2 guarantee:1 pseudo:2 every:2 wiesemann:1 rustem:1 runtime:1 control:4 grant:1 positive:3 before:1 subscript:1 path:1 laurent:1 creg:49 twice:1 equivalence:1 range:4 bi:1 regret:56 implement:1 empirical:1 erence:1 significantly:2 refers:1 cannot:2 clever:1 context:2 restriction:1 equivalent:3 deterministic:4 map:1 dean:1 starting:1 independently:1 formulate:1 pure:1 handle:1 notion:1 pt:5 yishay:1 exact:1 programming:6 associate:1 element:4 approximated:1 labeled:1 capture:1 worst:4 calculate:1 transitional:2 highest:1 rq:4 intuition:1 mentioned:1 convexity:2 complexity:2 reward:17 dynamic:4 solving:3 smart:1 easily:1 joint:1 represented:2 various:3 describe:1 monte:1 artificial:4 milp:18 kevin:2 whose:1 widely:1 solve:1 say:1 otherwise:1 statistic:2 advantage:1 product:7 realization:2 intuitive:1 scalability:3 exploiting:1 enhancement:1 extending:2 generating:1 leave:1 help:1 linearize:1 andrew:4 qt:4 dividing:1 indicate:4 kuhn:1 attribute:1 stochastic:4 exploration:1 adjacency:1 require:1 givan:1 preliminary:1 proposition:14 tighter:1 considered:1 normal:3 algorithmic:2 substituting:1 largest:1 minimization:2 hope:1 mit:3 iyengar:2 varying:1 corollary:2 focus:1 notational:1 posteriori:2 elsevier:1 dependent:7 el:1 typically:2 entire:2 a0:3 compactness:1 bandit:2 arg:4 overall:2 among:1 denoted:4 priori:1 development:2 constrained:1 special:1 equal:1 construct:2 never:1 once:1 ng:2 sampling:15 chernoff:2 x4:1 represents:3 cer:40 icml:1 future:3 report:1 brr:3 piecewise:1 gordon:1 employ:4 national:2 hamper:1 individual:1 cplex:2 tq:31 ofir:1 pradeep:1 myopic:2 tuple:1 lh:1 ascertained:1 huan:2 mobility:1 varakantham:1 stoltz:1 mebel:1 divide:2 alliance:1 theoretical:1 uncertain:15 increased:2 earlier:1 obstacle:2 smu:2 tp:2 applicability:1 cost:2 deviation:1 uniform:1 nearoptimal:1 dependency:2 considerably:1 international:3 randomized:5 michael:1 squared:2 aaai:1 management:3 hoeffding:1 return:1 account:2 amples:1 includes:1 break:4 substructure:5 contribution:1 minimize:2 variance:4 correspond:1 modelled:1 craig:2 carlo:1 pomdps:1 researcher:1 footnote:1 sebastian:1 definition:10 against:2 proof:13 associated:1 di:1 propagated:1 massachusetts:2 subsection:2 anytime:1 improves:1 higher:3 modal:1 formulation:3 furthermore:2 just:1 stage:2 correlation:3 hand:1 nonlinear:1 pineau:1 quality:4 mdp:6 effect:1 concept:2 consisted:1 hence:4 assigned:1 iteratively:1 deal:1 puterman:1 adjacent:2 noted:2 criterion:2 mina:1 demonstrate:1 reasoning:1 br0:2 recently:1 volume:1 discussed:2 refer:4 mellon:1 grid:2 mathematics:2 similarly:2 lightning:1 specification:1 jaillet:3 patrick:2 add:1 recent:2 optimizing:2 apart:1 n00014:1 binary:1 onr:1 vt:1 joelle:1 leach:1 minimum:2 schneider:1 employed:3 maximize:2 strike:1 multiple:2 technical:1 faster:1 ahmed:1 cross:1 variant:4 scalable:4 iteration:3 represent:4 adopting:1 disaster:1 arnab:1 cell:2 separately:1 spacing:1 interval:3 shie:2 effectiveness:1 integer:5 call:1 near:1 easy:1 switch:1 independence:5 variety:1 motivated:2 regt:1 expression:1 action:20 boutilier:2 detailed:3 discount:1 mid:1 reduced:1 generate:1 shapiro:2 percentage:1 singapore:4 rescue:1 
rtk:1 discrete:2 carnegie:1 key:4 drawn:2 urban:1 rewriting:1 verified:1 graph:1 tqt:11 run:2 uncertainty:42 throughout:1 decision:18 bound:20 hi:2 annual:1 adapted:1 constraint:3 x2:1 dence:1 dominated:4 emi:1 min:17 pruned:2 performing:1 martin:1 according:2 across:10 son:1 lp:1 modification:1 tkt:1 restricted:2 intuitively:1 ghaoui:2 taken:2 ln:1 equation:9 loose:1 mechanism:1 inimize:3 operation:4 rewritten:1 observe:1 alternative:2 robustness:3 sonia:1 rp:1 thomas:1 denotes:5 top:1 include:1 exploit:6 rqt:1 objective:5 already:1 occurs:1 strategy:8 parametric:1 dependence:3 rt:4 bagnell:2 amongst:1 kth:2 dp:11 distance:1 unable:2 simulated:4 thrun:1 reason:1 code:2 ratio:2 minimizing:10 robert:1 negative:1 ebastien:1 policy:38 perform:1 gilles:1 upper:3 observation:1 markov:12 benchmark:3 finite:2 t:5 payoff:2 mansour:1 arbitrary:1 introduced:3 pair:4 namely:1 required:2 connection:1 maximin:14 address:2 able:2 suggested:1 elicitation:1 below:2 pattern:1 program:2 max:22 suitable:2 difficulty:1 indicator:1 minimax:7 improve:3 technology:3 mdps:11 finished:2 axis:2 coupled:1 epoch:4 sg:2 literature:3 understanding:1 acknowledgement:1 loss:1 highlight:1 cdc:2 mixed:4 regan:3 geoffrey:1 revenue:2 foundation:2 s0:35 maxt:1 supported:2 last:2 formal:2 allow:1 understand:1 ber:1 institute:2 focussed:1 taking:3 munos:1 sparse:1 dimension:1 transition:13 cumulative:8 world:1 computes:1 author:1 reinforcement:1 saa:6 far:1 programme:1 approximate:4 pruning:6 handbook:1 assumed:1 br1:1 terrain:1 continuous:2 robust:11 obtaining:2 inventory:2 domain:4 xu:3 referred:3 en:1 brw:13 wiley:2 nilim:1 lie:1 minute:2 transitioning:2 specific:1 experimented:1 exists:1 supplement:6 linearization:1 horizon:6 demand:3 gap:5 entropy:3 bubeck:1 ordered:1 corresponds:1 relies:1 goal:2 leonard:1 jeff:1 replace:1 specifically:2 infinite:1 kearns:1 conservative:1 called:3 total:1 experimental:3 formally:2 select:1 reg:10 handling:2 |
4,387 | 4,971 | Improved and Generalized Upper Bounds on
the Complexity of Policy Iteration
Bruno Scherrer
Inria, Villers-lès-Nancy, F-54600, France
Université de Lorraine, LORIA, UMR 7503, Vandœuvre-lès-Nancy, F-54506, France
bruno.scherrer@inria.fr
Abstract
Given a Markov Decision Process (MDP) with n states and m actions per
state, we study the number of iterations needed by Policy Iteration (PI)
algorithms to converge to the optimal γ-discounted optimal policy. We consider
two variations of PI: Howard's PI, that changes the actions in all states
with a positive advantage, and Simplex-PI, that only changes the action in
the state with maximal advantage. We show that Howard's PI terminates
after at most n(m − 1)⌈(1/(1 − γ)) log(1/(1 − γ))⌉ = O((nm/(1 − γ)) log(1/(1 − γ))) iterations,
improving by a factor O(log n) a result by [3], while Simplex-PI terminates
after at most n²(m − 1)(1 + (2/(1 − γ)) log(1/(1 − γ))) = O((n²m/(1 − γ)) log(1/(1 − γ)))
iterations, improving by a factor O(log n) a result by [11]. Under some
structural assumptions of the MDP, we then consider bounds that are
independent of the discount factor γ: given a measure of the maximal transient
time τ_t and the maximal time τ_r to revisit states in recurrent classes
under all policies, we show that Simplex-PI terminates after at most
n²(m − 1)(⌈τ_r log(nτ_r)⌉ + ⌈τ_r log(nτ_t)⌉)[(m − 1)⌈nτ_t log(nτ_t)⌉ + ⌈nτ_t log(n²τ_t)⌉] =
Õ(n³m²τ_tτ_r) iterations. This generalizes a recent result for deterministic
MDPs by [8], in which τ_t ≤ n and τ_r ≤ n. We explain why
similar results seem hard to derive for Howard's PI. Finally, under
the additional (restrictive) assumption that the state space is partitioned
in two sets, respectively states that are transient and recurrent
for all policies, we show that Howard's PI terminates after at most
n(m − 1)(⌈τ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉) = Õ(nm(τ_t + τ_r)) iterations, while
Simplex-PI terminates after n(m − 1)(⌈nτ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉) =
Õ(n²m(τ_t + τ_r)) iterations.
1 Introduction
We consider a discrete-time dynamic system whose state transition depends on a control.
We assume that there is a state space X of finite size n. At state i ∈ {1, .., n}, the control is
chosen from a control space A of finite size¹ m. The control a ∈ A specifies the transition
probability p_ij(a) = P(i_{t+1} = j | i_t = i, a_t = a) to the next state j. At each transition,
the system is given a reward r(i, a, j), where r is the instantaneous reward function. In
this context, we look for a stationary deterministic policy (a function π : X → A that maps
states into controls²) that maximizes the expected discounted sum of rewards from any state
i, called the value of policy π at state i:

v_π(i) := E[ Σ_{k=0}^∞ γ^k r(i_k, a_k, i_{k+1}) | i_0 = i, ∀k ≥ 0, a_k = π(i_k), i_{k+1} ~ P(·|i_k, a_k) ]

where γ ∈ (0, 1) is a discount factor. The tuple ⟨X, A, p, r, γ⟩ is called a Markov Decision
Process (MDP) [9, 1], and the associated problem is known as optimal control.

¹ In the works of [11, 8, 3] that we reference, the integer "m" denotes the total number of actions,
that is nm with our notation. When we restate their result, we do it with our own notation, that
is we replace their "m" by "nm".
The optimal value starting from state i is defined as

v*(i) := max_π v_π(i).
For any policy π, we write P_π for the n × n stochastic matrix whose elements are p_ij(π(i))
and r_π the vector whose components are Σ_j p_ij(π(i)) r(i, π(i), j). The value functions v_π
and v* can be seen as vectors on X. It is well known that v_π is the solution of the following
Bellman equation:

v_π = r_π + γP_π v_π,

that is, v_π is a fixed point of the affine operator T_π : v ↦ r_π + γP_π v. It is also well known
that v* satisfies the following Bellman equation:

v* = max_π (r_π + γP_π v*) = max_π T_π v*
where the max operator is componentwise. In other words, v* is a fixed point of the nonlinear
operator T : v ↦ max_π T_π v. For any value vector v, we say that a policy π is greedy with
respect to the value v if it satisfies:

π ∈ arg max_{π'} T_{π'} v

or equivalently T_π v = T v. With some slight abuse of notation, we write G(v) for any policy
that is greedy with respect to v. The notions of optimal value function and greedy policies
are fundamental to optimal control because of the following property: any policy π* that is
greedy with respect to the optimal value v* is an optimal policy and its value v_{π*} is equal
to v*.
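As a minimal illustration of these operators, here is a Python sketch (the MDP instance and the 500-step value-iteration loop are made up; the paper itself does not define any implementation):

import numpy as np

rng = np.random.default_rng(5)
n, m, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n), size=(n, m))   # P[i, a, j] = p_ij(a)
r = rng.uniform(0.0, 1.0, size=(n, m))       # expected reward for (i, a)

def T_pi(pi, v):
    # affine operator T_pi v = r_pi + gamma P_pi v for a deterministic policy pi
    idx = np.arange(n)
    return r[idx, pi] + gamma * P[idx, pi] @ v

def T(v):
    # Bellman optimality operator; the max is taken componentwise over actions
    return (r + gamma * P @ v).max(axis=1)

def G(v):
    # a greedy policy with respect to v
    return (r + gamma * P @ v).argmax(axis=1)

v = np.zeros(n)
for _ in range(500):          # value iteration approximates the fixed point v*
    v = T(v)
print("greedy policy w.r.t. (approximate) v*:", G(v))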
Let π be some policy. We call advantage with respect to π the following quantity:

a_π = max_{π'} T_{π'} v_π − v_π = T v_π − v_π.

We call the set of switchable states of π the following set:

S_π = {i : a_π(i) > 0}.

Assume now that π is non-optimal (this implies that S_π is a non-empty set). For any
non-empty subset Y of S_π, we denote switch(π, Y) a policy satisfying:

∀i, switch(π, Y)(i) = G(v_π)(i) if i ∈ Y ;  π(i) if i ∉ Y.
The following result is well known (see for instance [9]).

Lemma 1. Let π be some non-optimal policy. If π' = switch(π, Y) for some non-empty
subset Y of S_π, then v_{π'} ≥ v_π and there exists at least one state i such that v_{π'}(i) > v_π(i).

This lemma is the foundation of the well-known iterative procedure, called Policy Iteration
(PI), that generates a sequence of policies (π_k) as follows:

π_{k+1} ← switch(π_k, Y_k) for some set Y_k such that ∅ ⊊ Y_k ⊆ S_{π_k}.

The choice of the subsets Y_k leads to different variations of PI. In this paper we will focus
on two specific variations:

² Restricting our attention to stationary deterministic policies is not a limitation. Indeed, for the
optimality criterion to be defined soon, it can be shown that there exists at least one stationary
deterministic policy that is optimal [9].
• When for all iterations k, Y_k = S_{π_k}, that is one switches the actions in all states with
a positive advantage with respect to π_k, the above algorithm is known as Howard's
PI; it can be seen then that π_{k+1} ∈ G(v_{π_k}).
• When for all k, Y_k is a singleton containing a state i_k ∈ arg max_i a_{π_k}(i), that is if
we only switch one action in the state with maximal advantage with respect to π_k,
we will call it Simplex-PI³.
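The two variations differ only in the switch set, so they fit naturally in one loop. A minimal sketch, not from the paper, on a made-up MDP (the tolerance 1e-10 is our own choice):

import numpy as np

rng = np.random.default_rng(6)
n, m, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n), size=(n, m))   # P[i, a, j]
r = rng.uniform(0.0, 1.0, size=(n, m))

def value(pi):
    idx = np.arange(n)                       # v_pi = (I - gamma P_pi)^{-1} r_pi
    return np.linalg.solve(np.eye(n) - gamma * P[idx, pi], r[idx, pi])

def policy_iteration(variant):
    pi, k = np.zeros(n, dtype=int), 0
    while True:
        v = value(pi)
        q = r + gamma * P @ v
        adv = q.max(axis=1) - v              # advantage a_pi = T v_pi - v_pi
        switchable = np.where(adv > 1e-10)[0]
        if switchable.size == 0:             # S_pi empty: pi is optimal
            return pi, k
        g = q.argmax(axis=1)                 # greedy policy G(v_pi)
        pi = pi.copy()
        if variant == "howard":
            pi[switchable] = g[switchable]   # switch every state in S_pi
        else:
            i = switchable[np.argmax(adv[switchable])]
            pi[i] = g[i]                     # Simplex-PI: only the max-advantage state
        k += 1

for variant in ("howard", "simplex"):
    pi, k = policy_iteration(variant)
    print(f"{variant}: optimal policy {pi} after {k} iterations")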
Since it generates a sequence of policies with increasing values, any variation of PI converges
to the optimal policy in a number of iterations that is smaller than the total number of
policies mⁿ. In practice, PI converges in very few iterations. On random MDP instances,
convergence often occurs in time sub-linear in n. The aim of this paper is to discuss existing
and provide new upper bounds on the number of iterations required by Howard's PI and
Simplex-PI that are much sharper than mⁿ.
In the next sections, we describe some known results (see [11] for a recent and comprehensive
review) about the number of iterations required by Howard's PI and Simplex-PI, along with
some of our original improvements and extensions.⁴
2 Bounds with respect to a Fixed Discount Factor γ < 1
A key observation for both algorithms, which will be central to the results we are about to
discuss, is that the sequence they generate satisfies some contraction property⁵. For any
vector u ∈ ℝⁿ, let ‖u‖_∞ = max_{1≤i≤n} |u(i)| be the max-norm of u. Let 1 be the vector of
which all components are equal to 1.

Lemma 2 (Proof in Section A). The sequence (‖v* − v_{π_k}‖_∞)_{k≥0} built by Howard's PI is
contracting with coefficient γ.

Lemma 3 (Proof in Section B). The sequence (1ᵀ(v* − v_{π_k}))_{k≥0} built by Simplex-PI is
contracting with coefficient 1 − (1 − γ)/n.
Though this observation is widely known for Howard's PI, it was to our knowledge never
mentioned explicitly in the literature for Simplex-PI. These contraction properties have
the following immediate consequence⁶.
Corollary 1. Let V_max = max_π ‖r_π‖_∞/(1 − γ) be an upper bound on ‖v_π‖_∞ for all policies π. In
order to get an ε-optimal policy, that is a policy π_k satisfying ‖v* − v_{π_k}‖_∞ ≤ ε, Howard's
PI requires at most ⌈log(V_max/ε)/(1 − γ)⌉ iterations, while Simplex-PI requires at most
⌈n log(nV_max/ε)/(1 − γ)⌉ iterations.
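For a feel of the magnitudes in Corollary 1, a small computation with illustrative values of n, γ, ε and V_max (all chosen arbitrarily here, not taken from the paper):

import math

n, gamma, eps, Vmax = 10, 0.9, 1e-3, 10.0
howard = math.ceil(math.log(Vmax / eps) / (1 - gamma))
simplex = math.ceil(n * math.log(n * Vmax / eps) / (1 - gamma))
print(f"Howard's PI: at most {howard} iterations; Simplex-PI: at most {simplex}")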
These bounds depend on the precision term ε, which means that Howard's PI and Simplex-PI
are weakly polynomial for a fixed discount factor γ. An important breakthrough was
recently achieved by [11], who proved that one can remove the dependency with respect to ε,
and thus show that Howard's PI and Simplex-PI are strongly polynomial for a fixed discount
factor γ.
Theorem 1 ([11]). Simplex-PI and Howard's PI both terminate after at most
n(m − 1)⌈(n/(1 − γ)) log(n²/(1 − γ))⌉ iterations.
³ In this case, PI is equivalent to running the simplex algorithm with the highest-pivot rule on a
linear program version of the MDP problem [11].
⁴ For clarity, all proofs are deferred to the Appendix. The first proofs about bounds for the
case γ < 1 are given in the Appendix of the paper. The other proofs, that are more involved, are
provided in the Supplementary Material.
⁵ A sequence of non-negative numbers (x_k)_{k≥0} is contracting with coefficient λ if and only if for
all k ≥ 0, x_{k+1} ≤ λx_k.
⁶ For Howard's PI, we have ‖v* − v_{π_k}‖_∞ ≤ γ^k ‖v* − v_{π_0}‖_∞ ≤ γ^k V_max. Thus, a sufficient condition
for ‖v* − v_{π_k}‖_∞ < ε is γ^k V_max < ε, which is implied by k ≥ log(V_max/ε)/(1 − γ) > log(V_max/ε)/log(1/γ).
For Simplex-PI, we have ‖v* − v_{π_k}‖_∞ ≤ ‖v* − v_{π_k}‖_1 ≤ (1 − (1 − γ)/n)^k ‖v* − v_{π_0}‖_1 ≤
(1 − (1 − γ)/n)^k nV_max, and the conclusion is similar to that for Howard's PI.
The proof is based on the fact that PI corresponds to the simplex algorithm in a linear
programming formulation of the MDP problem. Using a more direct proof, [3] recently
improved the result by a factor O(n) for Howard's PI.

Theorem 2 ([3]). Howard's PI terminates after at most (nm + 1)⌈(1/(1 − γ)) log(n/(1 − γ))⌉ iterations.
Our first two results, that are consequences of the contraction properties (Lemmas 2 and
3), are stated in the following theorems.

Theorem 3 (Proof in Section C). Howard's PI terminates after at most
n(m − 1)⌈(1/(1 − γ)) log(1/(1 − γ))⌉ iterations.

Theorem 4 (Proof in Section D). Simplex-PI terminates after at most
n(m − 1)⌈(n/(1 − γ)) log(n/(1 − γ))⌉ iterations.
Our result for Howard's PI is a factor O(log n) better than the previous best result of [3].
Our result for Simplex-PI is only very slightly better (by a factor 2) than that of [11], and
uses a proof that is more direct. Using more refined arguments, we managed to also improve
the bound for Simplex-PI by a factor O(log n).
Theorem 5 (Proof in Section E). Simplex-PI terminates after at most
n²(m − 1)(1 + (2/(1 − γ)) log(1/(1 − γ))) iterations.
Compared to Howard's PI, our bound for Simplex-PI is a factor O(n) larger. However, since
one changes only one action per iteration, each iteration may have a complexity lower by a
factor n: the update of the value can be done in time O(n²) through the Sherman-Morrison
formula, though in general each iteration of Howard's PI, which amounts to computing the
value of some policy that may be arbitrarily different from the previous policy, may require
O(n³) time. Overall, both algorithms seem to have a similar complexity.
It is easy to see that the linear dependency of the bound for Howard's PI with respect to
n is optimal. We conjecture that the linear dependency of both bounds with respect to
m is also optimal. The dependency with respect to the term 1/(1 − γ) may be improved, but
removing it is impossible for Howard's PI and very unlikely for Simplex-PI. [2] describes an
MDP for which Howard's PI requires an exponential (in n) number of iterations for γ = 1
and [5] argued that this holds also when γ is in the vicinity of 1. Though a similar result
does not seem to exist for Simplex-PI in the literature, [7] consider four variations of PI
that all switch one action per iteration, and show through specifically designed MDPs that
they may require an exponential (in n) number of iterations when γ = 1.
3 Bounds for Simplex-PI that are independent of γ
In this section, we will describe some bounds that do not depend on γ but that will be
based on some structural assumptions of the MDPs. On this topic, [8] recently showed the
following result for deterministic MDPs.

Theorem 6 ([8]). If the MDP is deterministic, then Simplex-PI terminates after at most
O(n⁵m² log² n) iterations.
Given a policy π of a deterministic MDP, states are either on cycles or on paths induced by
π. The core of the proof relies on the following lemmas that altogether show that cycles are
created regularly and that significant progress is made every time a new cycle appears; in
other words, significant progress is made regularly.

Lemma 4. If the MDP is deterministic, after at most nm⌈2(n − 1) log n⌉ iterations, either
Simplex-PI finishes or a new cycle appears.

Lemma 5. If the MDP is deterministic, when Simplex-PI moves from π to π' where π'
involves a new cycle, we have

1ᵀ(v_{π*} − v_{π'}) ≤ (1 − 1/n) · 1ᵀ(v_{π*} − v_π).
Indeed, these observations suffice to prove⁷ that Simplex-PI terminates after
O(n⁴m² log(n/(1 − γ))) = Õ(n⁴m²) iterations. Removing completely the dependency with respect
to the discount factor γ (the term in O(log(1/(1 − γ)))) requires a careful extra work described
in [8], which incurs an extra term of order O(n log n).
At a more technical level, the proof of [8] critically relies on some properties of the vector
x_π = (I − γP_πᵀ)⁻¹ 1 that provides a discounted measure of state visitations along the
trajectories induced by a policy π starting from a uniform distribution:

∀i ∈ X, x_π(i) = n Σ_{t=0}^∞ γ^t P(i_t = i | i_0 ~ U, a_t = π(i_t)),

where U denotes the uniform distribution on the state space X. For any policy π and state
i, we trivially have x_π(i) ∈ [1, n/(1 − γ)]. The proof exploits the fact that x_π(i) belongs to the
set (1, n) when i is on a path of π, while x_π(i) belongs to the set (1/(1 − γ), n/(1 − γ)) when i is on
a cycle of π. As we are going to show, it is possible to extend the proof of [8] to stochastic
MDPs. Given a policy π of a stochastic MDP, states are either in recurrent classes or
transient classes (these two categories respectively generalize those of cycles and paths).
We will consider the following structural assumption.
Assumption 1. Let τ_t ≥ 1 and τ_r ≥ 1 be the smallest constants such that for all policies
π and all states i,

(1 ≤) x_π(i) ≤ τ_t   if i is transient for π, and
n/((1 − γ)τ_r) ≤ x_π(i) (≤ n/(1 − γ))   if i is recurrent for π.
The constant τ_t (resp. τ_r) can be seen as a measure of the time needed to leave transient
states (resp. the time needed to revisit states in recurrent classes). In particular, when γ
tends to 1, it can be seen that τ_t is an upper bound of the expected time L needed to "Leave
the set of transient states", since for any policy π,

lim_{γ→1} τ_t ≥ (1/n) lim_{γ→1} Σ_{i transient for π} x_π(i) = Σ_{t=0}^∞ P(i_t transient for π | i_0 ~ U, a_t = π(i_t))
= E[ L | i_0 ~ U, a_t = π(i_t) ].

Similarly, when γ is in the vicinity of 1, 1/τ_r is the minimal asymptotic frequency⁸ in recurrent
states given that one starts from a random uniform state, since for any policy π and recurrent
state i:

lim_{γ→1} ((1 − γ)/n) x_π(i) = lim_{γ→1} (1 − γ) Σ_{t=0}^∞ γ^t P(i_t = i | i_0 ~ U, a_t = π(i_t))
= lim_{T→∞} (1/T) Σ_{t=0}^{T−1} P(i_t = i | i_0 ~ U, a_t = π(i_t)).
With Assumption 1 in hand, we can generalize Lemmas 4-5 as follows.
Lemma 6. If the MDP satisfies Assumption 1, after at most
n[(m − 1)⌈nτ_t log(nτ_t)⌉ + ⌈nτ_t log(n²τ_t)⌉] iterations, either Simplex-PI finishes or a
new recurrent class appears.
⁷ This can be done by using arguments similar to the proof of Theorem 4 in Section D.
⁸ If the MDP is aperiodic and irreducible, and thus admits a stationary distribution ν_π for any
policy π, one can see that 1/τ_r = min_{π, i recurrent for π} ν_π(i).
Lemma 7. If the MDP satisfies Assumption 1, when Simplex-PI moves from π to π' where
π' involves a new recurrent class, we have

1ᵀ(v_{π*} − v_{π'}) ≤ (1 − 1/τ_r) · 1ᵀ(v_{π*} − v_π).

From these generalized observations, we can deduce the following original result.
Theorem 7 (Proof in Appendix F of the Supp. Material). If the MDP satisfies Assumption 1,
then Simplex-PI terminates after at most

n²(m − 1)(⌈τ_r log(nτ_r)⌉ + ⌈τ_r log(nτ_t)⌉)[(m − 1)⌈nτ_t log(nτ_t)⌉ + ⌈nτ_t log(n²τ_t)⌉]

iterations.
Remark 1. This new result is a strict generalization of the result for deterministic MDPs.
Indeed, in the deterministic case, we have τ_t ≤ n and τ_r ≤ n, and it is easy to see that
Lemmas 6, 7 and Theorem 7 respectively imply Lemmas 4, 5 and Theorem 6.
An immediate consequence of the above result is that Simplex-PI is strongly polynomial for
sets of MDPs that are much larger than the deterministic MDPs mentioned in Theorem 6.

Corollary 2. For any family of MDPs indexed by n and m such that τ_t and τ_r are polynomial
functions of n and m, Simplex-PI terminates after a number of steps that is polynomial
in n and m.
4 Similar results for Howard's PI?
One may then wonder whether similar results can be derived for Howard's PI. Unfortunately,
and as quickly mentioned by [8], the line of analysis developed for Simplex-PI does not
seem to adapt easily to Howard's PI, because simultaneously switching several actions can
interfere in a way that the policy improvement turns out to be small. We can be more
precise on what actually breaks in the approach we have described so far. On the one hand,
it is possible to write counterparts of Lemmas 4 and 6 for Howard's PI (see Appendix G of
the Supp. Material).

Lemma 8. If the MDP is deterministic, after at most n iterations, either Howard's PI
finishes or a new cycle appears.

Lemma 9. If the MDP satisfies Assumption 1, after at most nm⌈τ_t log nτ_t⌉ iterations,
either Howard's PI finishes or a new recurrent class appears.
However, on the other hand, we did not manage to adapt Lemma 5 nor Lemma 7. In fact,
it is unlikely that a result similar to that of Lemma 5 will be shown to hold for Howard's PI.
In a recent deterministic example due to [4] showing that Howard's PI may require as many
as Ω(n²) iterations, new cycles are created every single iteration but the sequence of values
satisfies⁹, for all iterations k < n²/4 + n/4 and states i,

v*(i) − v_{π_{k+1}}(i) ≥ (1 − (2/n)^k) (v*(i) − v_{π_k}(i)).
smaller. With respect to Simplex-PI, this suggests that Howard?s PI may suffer from subtle
specific pathologies. In fact, the problem of determining the number of iterations required
by Howard?s PI has been challenging for almost 30 years. It was originally identified as
an open problem by [10]. In the simplest?deterministic?case, the question is still open:
the currently best known lower bound is the O(n2 ) bound by [4] we have just mentionned,
n
while the best known upper bound is O( mn ) (valid for all MDPs) due to [6].
⁹ This MDP has an even number of states n = 2p. The goal is to minimize the long term expected
cost. The optimal value function satisfies v*(i) = −p^N for all i, with N = p² + p. The policies
generated by Howard's PI have values v_{π_k}(i) ∈ (p^{N−k−1}, p^{N−k}). We deduce that for all iterations
k and states i,

(v*(i) − v_{π_{k+1}}(i)) / (v*(i) − v_{π_k}(i)) ≥ (1 + p^{−k−2})/(1 + p^{−k}) = 1 − (p^{−k} − p^{−k−2})/(1 + p^{−k})
≥ 1 − p^{−k}(1 − p^{−2}) ≥ 1 − p^{−k}.
On the positive side, an adaptation of the line of proof we have considered so far can be
carried out under the following assumption.

Assumption 2. The state space X can be partitioned in two sets T and R such that for
all policies π, the states of T are transient and those of R are recurrent.

Indeed, under this assumption, we can prove for Howard's PI a variation of Lemma 7
introduced for Simplex-PI.

Lemma 10. For an MDP satisfying Assumptions 1-2, suppose Howard's PI moves from π
to π' and that π' involves a new recurrent class. Then

1ᵀ(v_{π*} − v_{π'}) ≤ (1 − 1/τ_r) · 1ᵀ(v_{π*} − v_π).

And we can deduce the following original bound (that also applies to Simplex-PI).

Theorem 8 (Proof in Appendix H of the Supp. Material). If the MDP satisfies Assumptions 1-2,
then Howard's PI terminates after at most n(m − 1)(⌈τ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉)
iterations, while Simplex-PI terminates after at most n(m − 1)(⌈nτ_t log nτ_t⌉ + ⌈τ_r log nτ_r⌉)
iterations.
It should however be noted that Assumption 2 is rather restrictive. It implies that the algorithms converge on the recurrent states independently of the transient states, and thus the
analysis can be decomposed in two phases: 1) the convergence on recurrent states and then
2) the convergence on transient states (given that recurrent states do not change anymore).
The analysis of the first phase (convergence on recurrent states) is greatly facilitated by the
fact that in this case, a new recurrent class appears every single iteration (this is in contrast
with Lemmas 4, 6, 8 and 9 that were designed to show under which conditions cycles and
recurrent classes are created). Furthermore, the analysis of the second phase (convergence
on transient states) is similar to that of the discounted case of Theorems 3 and 4. In other
words, if this last result sheds some light on the practical efficiency of Howard?s PI and
Simplex-PI, a general analysis of Howard?s PI is still largely open, and constitutes our main
future work.
A Contraction property for Howard's PI (Proof of Lemma 2)

For any k, using the facts that {∀π, T_π v_π = v_π}, {T_{π*} v_{π_{k−1}} ≤ T_{π_k} v_{π_{k−1}}} and
{Lemma 1 and P_{π_k} is positive definite}, we have

v_{π*} − v_{π_k} = T_{π*} v_{π*} − T_{π*} v_{π_{k−1}} + T_{π*} v_{π_{k−1}} − T_{π_k} v_{π_{k−1}} + T_{π_k} v_{π_{k−1}} − T_{π_k} v_{π_k}
≤ γP_{π*}(v_{π*} − v_{π_{k−1}}) + γP_{π_k}(v_{π_{k−1}} − v_{π_k}) ≤ γP_{π*}(v_{π*} − v_{π_{k−1}}).

Since v_{π*} − v_{π_k} is non-negative, we can take the max-norm and get: ‖v_{π*} − v_{π_k}‖_∞ ≤
γ‖v_{π*} − v_{π_{k−1}}‖_∞.
B Contraction property for Simplex-PI (Proof of Lemma 3)

By using the fact that {v_π = T_π v_π ⟺ v_π = (I − γP_π)⁻¹ r_π}, we have that for all pairs of
policies π and π':

v_{π'} − v_π = (I − γP_{π'})⁻¹ r_{π'} − v_π = (I − γP_{π'})⁻¹ (r_{π'} + γP_{π'} v_π − v_π)
= (I − γP_{π'})⁻¹ (T_{π'} v_π − v_π).   (1)

On the one hand, by using this lemma and the fact that {T_{π_{k+1}} v_{π_k} − v_{π_k} ≥ 0}, we have for
any k: v_{π_{k+1}} − v_{π_k} = (I − γP_{π_{k+1}})⁻¹ (T_{π_{k+1}} v_{π_k} − v_{π_k}) ≥ T_{π_{k+1}} v_{π_k} − v_{π_k}, which implies that

1ᵀ(v_{π_{k+1}} − v_{π_k}) ≥ 1ᵀ(T_{π_{k+1}} v_{π_k} − v_{π_k}).   (2)

On the other hand, using Equation (1) and the facts that {‖(I − γP_{π*})⁻¹‖_∞ = 1/(1 − γ)
and (I − γP_{π*})⁻¹ is positive definite}, {max_s T_{π_{k+1}} v_{π_k}(s) = max_{s,π'} T_{π'} v_{π_k}(s)} and
{∀x ≥ 0, max_s x(s) ≤ 1ᵀx}, we have:

v_{π*} − v_{π_k} = (I − γP_{π*})⁻¹ (T_{π*} v_{π_k} − v_{π_k}) ≤ (1/(1 − γ)) max_s [T_{π*} v_{π_k}(s) − v_{π_k}(s)]
≤ (1/(1 − γ)) max_s [T_{π_{k+1}} v_{π_k}(s) − v_{π_k}(s)] ≤ (1/(1 − γ)) 1ᵀ(T_{π_{k+1}} v_{π_k} − v_{π_k}),

which implies (using {∀x, 1ᵀx ≤ n‖x‖_∞}) that

1ᵀ(T_{π_{k+1}} v_{π_k} − v_{π_k}) ≥ (1 − γ)‖v_{π*} − v_{π_k}‖_∞ ≥ ((1 − γ)/n) 1ᵀ(v_{π*} − v_{π_k}).   (3)

Combining Equations (2) and (3), we get:

1ᵀ(v_{π*} − v_{π_{k+1}}) = 1ᵀ(v_{π*} − v_{π_k}) − 1ᵀ(v_{π_{k+1}} − v_{π_k})
≤ 1ᵀ(v_{π*} − v_{π_k}) − ((1 − γ)/n) 1ᵀ(v_{π*} − v_{π_k}) = (1 − (1 − γ)/n) 1ᵀ(v_{π*} − v_{π_k}).
C A bound for Howard's PI when γ < 1 (Proof of Theorem 3)

For any k, by using Equation (1) and the fact that {v* − v_{π_k} ≥ 0 and P_{π_k} is positive definite},
we have:

v* − T_{π_k} v* = (I − γP_{π_k})(v* − v_{π_k}) ≤ v* − v_{π_k}.

Since v* − T_{π_k} v* is non-negative, we can take the max-norm and, using Lemma 2, Equation (1)
and the fact that {‖(I − γP_{π_0})⁻¹‖_∞ = 1/(1 − γ)}, we get:

‖v* − T_{π_k} v*‖_∞ ≤ ‖v* − v_{π_k}‖_∞ ≤ γ^k ‖v* − v_{π_0}‖_∞
= γ^k ‖(I − γP_{π_0})⁻¹ (v* − T_{π_0} v*)‖_∞ ≤ (γ^k/(1 − γ)) ‖v* − T_{π_0} v*‖_∞.   (4)

By definition of the max-norm, there exists a state s_0 such that v*(s_0) − [T_{π_0} v*](s_0) =
‖v* − T_{π_0} v*‖_∞. From Equation (4), we deduce that for all k,

v*(s_0) − [T_{π_k} v*](s_0) ≤ ‖v* − T_{π_k} v*‖_∞ ≤ (γ^k/(1 − γ)) ‖v* − T_{π_0} v*‖_∞
= (γ^k/(1 − γ)) (v*(s_0) − [T_{π_0} v*](s_0)).

As a consequence, the action π_k(s_0) must be different from π_0(s_0) when γ^k/(1 − γ) < 1, that is for
all values of k satisfying k ≥ k* = ⌈log(1/(1 − γ))/(1 − γ)⌉ > log_{1/γ}(1/(1 − γ)). In other words, if some
policy π is not optimal, then one of its non-optimal actions will be eliminated for good after at most
k* iterations. By repeating this argument, one can eliminate all non-optimal actions (they
are at most n(m − 1)), and the result follows.
D A bound for Simplex-PI when γ < 1 (Proof of Theorem 4)

Using {∀x ≥ 0, ‖x‖_∞ ≤ 1ᵀx}, Lemma 3, {∀x, 1ᵀx ≤ n‖x‖_∞}, Equation (1) and {‖(I −
γP_{π_0})⁻¹‖_∞ = 1/(1 − γ)}, we have for all k,

‖v_{π*} − T_{π_k} v_{π*}‖_∞ ≤ ‖v_{π*} − v_{π_k}‖_∞ ≤ 1ᵀ(v_{π*} − v_{π_k})
≤ (1 − (1 − γ)/n)^k 1ᵀ(v_{π*} − v_{π_0}) ≤ n(1 − (1 − γ)/n)^k ‖v_{π*} − v_{π_0}‖_∞
= n(1 − (1 − γ)/n)^k ‖(I − γP_{π_0})⁻¹ (v* − T_{π_0} v*)‖_∞ ≤ (n/(1 − γ))(1 − (1 − γ)/n)^k ‖v_{π*} − T_{π_0} v_{π*}‖_∞.

Similarly to the proof for Howard's PI, we deduce that a non-optimal action is eliminated
after at most k* = ⌈(n/(1 − γ)) log(n/(1 − γ))⌉ ≥ ⌈log(n/(1 − γ)) / log(1/(1 − (1 − γ)/n))⌉ iterations,
and the overall number of iterations is obtained by noting that there are at most n(m − 1)
non-optimal actions to eliminate.
References
[1] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific,
1996.
[2] J. Fearnley. Exponential lower bounds for policy iteration. In Proceedings of the 37th
International Colloquium on Automata, Languages and Programming: Part II,
ICALP'10, pages 551-562, Berlin, Heidelberg, 2010. Springer-Verlag.
[3] T.D. Hansen, P.B. Miltersen, and U. Zwick. Strategy iteration is strongly polynomial
for 2-player turn-based stochastic games with a constant discount factor. J. ACM,
60(1):1:1-1:16, February 2013.
[4] T.D. Hansen and U. Zwick. Lower bounds for Howard's algorithm for finding minimum
mean-cost cycles. In ISAAC (1), pages 415-426, 2010.
[5] R. Hollanders, J.C. Delvenne, and R. Jungers. The complexity of policy iteration is
exponential for discounted Markov decision processes. In 51st IEEE Conference on
Decision and Control (CDC'12), 2012.
[6] Y. Mansour and S.P. Singh. On the complexity of policy iteration. In UAI, pages
401-408, 1999.
[7] M. Melekopoglou and A. Condon. On the complexity of the policy improvement
algorithm for Markov decision processes. INFORMS Journal on Computing, 6(2):188-192,
1994.
[8] I. Post and Y. Ye. The simplex method is strongly polynomial for deterministic Markov
decision processes. Technical report, arXiv:1208.5083v2, 2012.
[9] M. Puterman. Markov Decision Processes. Wiley, New York, 1994.
[10] N. Schmitz. How good is Howard's policy improvement algorithm? Zeitschrift für
Operations Research, 29(7):315-316, 1985.
[11] Y. Ye. The simplex and policy-iteration methods are strongly polynomial for the Markov
decision problem with a fixed discount rate. Math. Oper. Res., 36(4):593-603, 2011.
4,388 | 4,972 | Efficient Exploration and Value Function
Generalization in Deterministic Systems
Zheng Wen
Stanford University
[email protected]
Benjamin Van Roy
Stanford University
[email protected]
Abstract
We consider the problem of reinforcement learning over episodes of a finite-horizon deterministic system and as a solution propose optimistic constraint propagation (OCP), an algorithm designed to synthesize efficient exploration and value function generalization. We establish that when the true value function Q∗ lies within the hypothesis class Q, OCP selects optimal actions over all but at most dimE[Q] episodes, where dimE denotes the eluder dimension. We establish further efficiency and asymptotic performance guarantees that apply even if Q∗ does not lie in Q, for the special case where Q is the span of pre-specified indicator functions over disjoint sets.
1 Introduction
A growing body of work on efficient reinforcement learning provides algorithms with guarantees on sample and computational efficiency [13, 6, 2, 22, 4, 9]. This literature highlights the point that an effective exploration scheme is critical to the design of any efficient reinforcement learning algorithm. In particular, popular exploration schemes such as ε-greedy, Boltzmann, and knowledge gradient can require learning times that grow exponentially in the number of states and/or the planning horizon.
The aforementioned literature focuses on tabula rasa learning; that is, algorithms aim to learn with
little or no prior knowledge about transition probabilities and rewards. Such algorithms require
learning times that grow at least linearly with the number of states. Despite the valuable insights
that have been generated through their design and analysis, these algorithms are of limited practical
import because state spaces in most contexts of practical interest are enormous. There is a need for
algorithms that generalize from past experience in order to learn how to make effective decisions in
reasonable time.
There has been much work on reinforcement learning algorithms that generalize (see, e.g.,
[5, 23, 24, 18] and references therein). Most of these algorithms do not come with statistical or
computational efficiency guarantees, though there are a few noteworthy exceptions, which we now
discuss. A number of results treat policy-based algorithms (see [10, 3] and references therein), in
which the goal is to select high-performers among a pre-specified collection of policies as learning progresses. Though interesting results have been produced in this line of work, each entails
quite restrictive assumptions or does not make strong guarantees. Another body of work focuses
on model-based algorithms. An algorithm is proposed in [12] that fits a factored model to observed
data and makes decisions based on the fitted model. The authors establish a sample complexity
bound that is polynomial in the number of model parameters rather than the number of states, but
the algorithm is computationally intractable because of the difficulty of solving factored MDPs. A
recent paper [14] proposes a novel algorithm for the case where the true environment is known to
belong to a finite or compact class of models, and shows that its sample complexity is polynomial
in the cardinality of the model class if the model class is finite, or the ε-covering-number if the model class is compact. Though this result is theoretically interesting, for most model classes of interest, the ε-covering-number is enormous since it typically grows exponentially in the number of
free parameters. Another recent paper [17] establishes a regret bound for an algorithm that applies
to problems with continuous state spaces and Hölder-continuous rewards and transition kernels.
Though the results represent an interesting contribution to the literature, a couple features of the
regret bound weaken its practical implications. First, regret grows linearly with the H?older constant
of the transition kernel, which for most contexts of practical relevance grows exponentially in the
number of state variables. Second, the dependence on time becomes arbitrarily close to linear as the
dimension of the state space grows. Reinforcement learning in linear systems with quadratic cost
is treated in [1]. The method proposed is shown to realize regret that grows with the square root
of time. The result is interesting and the property is desirable, but to the best of our knowledge,
expressions derived for regret in the analysis exhibit an exponential dependence on the number of
state variables, and further, we are not aware of a computationally efficient way of implementing the
proposed method. This work was extended by [8] to address linear systems with sparse structure.
Here, there are efficiency guarantees that scale gracefully with the number of state variables, but
only under sparsity and other technical assumptions.
The most popular approach to generalization in the applied reinforcement learning literature involves
fitting parameterized value functions. Such approaches relate closely to supervised learning in that
they learn functions from state to value, though a difference is that value is influenced by action
and observed only through delayed feedback. One advantage over model learning approaches is
that, given a fitted value function, decisions can be made without solving a potentially intractable
control problem. We see this as a promising direction, though there currently is a lack of theoretical
results that provide attractive bounds on learning time with value function generalization. A relevant
paper along this research line is [15], which studies the efficient reinforcement learning with value
function generalization in the KWIK framework (see [16]), and reduces the efficient reinforcement
learning problem to the efficient KWIK online regression problem. However, the authors do not
show how to solve the general KWIK online regression problem efficiently, and it is not even clear
whether this is possible. Thus, though the result of [15] is interesting, it does not provide a provably
efficient algorithm.
An important challenge that remains is to couple exploration and value function generalization in
a provably effective way, and in particular, to establish sample and computational efficiency guarantees that scale gracefully with the planning horizon and model complexity. In this paper, we aim
to make progress in this direction. To start with a simple context, we restrict our attention to deterministic systems that evolve over finite time horizons, and we consider episodic learning, in which
an agent repeatedly interacts with the same system. As a solution to the problem, we propose optimistic constraint propagation (OCP), a computationally efficient reinforcement learning algorithm
designed to synthesize efficient exploration and value function generalization. We establish that
when the true value function Q∗ lies within the hypothesis class Q, OCP selects optimal actions over all but at most dimE[Q] episodes. Here, dimE denotes the eluder dimension, which quantifies
complexity of the hypothesis class. A corollary of this result is that regret is bounded by a function
that is constant over time and linear in the problem horizon and eluder dimension.
To put our aforementioned result in perspective, it is useful to relate it to other lines of work.
Consider first the broad area of reinforcement learning algorithms that fit value functions, such
as SARSA [19]. Even with the most commonly used sort of hypothesis class Q, which is made
up of linear combinations of fixed basis functions, and even when the hypothesis class contains the
true value function Q∗, there are no guarantees that these algorithms will efficiently learn to make
near-optimal decisions. On the other hand, our result implies that OCP attains near-optimal performance in time that scales linearly with the number of basis functions. Now consider the more
specialized context of a deterministic linear system with quadratic cost and a finite time horizon.
The analysis of [1] can be leveraged to produce regret bounds that scale exponentially in the number
of state variables. On the other hand, using a hypothesis space Q consisting of quadratic functions
of state-action pairs, the results of this paper show that OCP behaves near optimally within time that
scales quadratically in the number of state and action variables.
We also establish efficiency and asymptotic performance guarantees that apply to agnostic reinforcement learning, where Q∗ does not necessarily lie in Q. In particular, we consider the case where Q
is the span of pre-specified indicator functions over disjoint sets. Our results here add to the literature on agnostic reinforcement learning with such a hypothesis class [21, 25, 7, 26]. Prior work in
this area has produced interesting algorithms and insights, as well as bounds on performance loss
associated with potential limits of convergence, but no convergence or efficiency guarantees.
2 Reinforcement Learning in Deterministic Systems
In this paper, we consider an episodic reinforcement learning (RL) problem in which an agent repeatedly interacts with a discrete-time finite-horizon deterministic system, and refer to each interaction as an episode. The system is identified by a sextuple M = (S, A, H, F, R, S), where S is the state space, A is the action space, H is the horizon, F is a system function, R is a reward function and S is a sequence of states. If action a ∈ A is selected while the system is in state x ∈ S at period t = 0, 1, · · · , H − 1, a reward of R_t(x, a) is realized; furthermore, if t < H − 1, the state transitions to F_t(x, a). Each episode terminates at period H − 1, and then a new episode begins. The initial state of episode j is the jth element of S.

To represent the history of actions and observations over multiple episodes, we will often index variables by both episode and period. For example, x_{j,t} and a_{j,t} denote the state and action at period t of episode j, where j = 0, 1, · · · and t = 0, 1, · · · , H − 1. To count the total number of steps since the agent started learning, we say period t in episode j is time jH + t.

A (deterministic) policy µ = (µ_0, . . . , µ_{H−1}) is a sequence of functions, each mapping S to A. For each policy µ, define a value function V_t^µ(x) = Σ_{τ=t}^{H−1} R_τ(x_τ, a_τ), where x_t = x, x_{τ+1} = F_τ(x_τ, a_τ), and a_τ = µ_τ(x_τ). The optimal value function is defined by V_t^∗(x) = sup_µ V_t^µ(x). A policy µ∗ is said to be optimal if V^{µ∗} = V^∗. Throughout this paper, we will restrict attention to systems M = (S, A, H, F, R, S) that admit optimal policies. Note that this restriction incurs no loss of generality when the action space is finite.

It is also useful to define an action-contingent optimal value function: Q∗_t(x, a) = R_t(x, a) + V∗_{t+1}(F_t(x, a)) for t < H − 1, and Q∗_{H−1}(x, a) = R_{H−1}(x, a). Then, a policy µ∗ is optimal if µ∗_t(x) ∈ argmax_{a∈A} Q∗_t(x, a) for all (x, t).

A reinforcement learning algorithm generates each action a_{j,t} based on observations made up to the tth period of the jth episode, including all states, actions, and rewards observed in previous episodes and earlier in the current episode, as well as the state space S, action space A, horizon H, and possible prior information. In each episode, the algorithm realizes reward R^{(j)} = Σ_{t=0}^{H−1} R_t(x_{j,t}, a_{j,t}). Note that R^{(j)} ≤ V∗_0(x_{j,0}) for each jth episode. One way to quantify performance of a reinforcement learning algorithm is in terms of the number of episodes J_L for which R^{(j)} < V∗_0(x_{j,0}) − ε, where ε ≥ 0 is a pre-specified performance loss threshold. If the reward function R is bounded, with |R_t(x, a)| ≤ R̄ for all (x, a, t), then this also implies a bound on regret over episodes experienced prior to time T, defined by Regret(T) = Σ_{j=0}^{⌊T/H⌋−1} (V∗_0(x_{j,0}) − R^{(j)}). In particular, Regret(T) ≤ 2R̄ H J_L + ε⌊T/H⌋.
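As a concrete illustration of this setup, the following sketch (ours, not from the paper) computes Q∗ by backward induction for a small tabular deterministic system, where F[t][x][a] gives the successor state and R[t][x][a] the reward:

def optimal_q(F, R, n_states, n_actions, H):
    # Backward induction: Q*_t(x,a) = R_t(x,a) + V*_{t+1}(F_t(x,a)),
    # with Q*_{H-1}(x,a) = R_{H-1}(x,a).
    V = [0.0] * n_states
    Q = [[[0.0] * n_actions for _ in range(n_states)] for _ in range(H)]
    for t in reversed(range(H)):
        for x in range(n_states):
            for a in range(n_actions):
                Q[t][x][a] = R[t][x][a] + (V[F[t][x][a]] if t < H - 1 else 0.0)
        V = [max(Q[t][x]) for x in range(n_states)]
    return Q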
3 Optimistic Constraint Propagation
At a high level, our reinforcement learning algorithm, optimistic constraint propagation (OCP), selects actions based on the optimism in the face of uncertainty principle and, based on observed rewards and state transitions, propagates constraints backwards through time. Specifically, it takes as input the state space S, the action space A, the horizon H, and a hypothesis class Q of candidates for Q∗. The algorithm maintains a sequence of subsets of Q and a sequence of scalar "upper bounds", which summarize constraints that past experience suggests for ruling out hypotheses. Each constraint in this sequence is specified by a state x ∈ S, an action a ∈ A, a period t = 0, . . . , H − 1, and an interval [L, U] ⊆ ℝ, and takes the form {Q ∈ Q : L ≤ Q_t(x, a) ≤ U}. The upper bound of the constraint is U. Given a sequence C = (C_1, . . . , C_{|C|}) of such constraints and upper bounds U = (U_1, . . . , U_{|C|}), a set Q_C is defined constructively by Algorithm 1. Note that if the constraints do not conflict then Q_C = C_1 ∩ · · · ∩ C_{|C|}. When constraints do conflict, priority is assigned first based on upper bound, with smaller upper bound preferred, and then, in the event of ties in upper bound, based on position in the sequence, with more recent experience preferred.
Algorithm 1 Constraint Selection
Require: Q, C
  Q_C ← Q, u ← min U
  while u < ∞ do
    for τ = |C| to 1 do
      if U_τ = u and Q_C ∩ C_τ ≠ ∅ then
        Q_C ← Q_C ∩ C_τ
      end if
    end for
    if {u′ ∈ U : u′ > u} = ∅ then
      return Q_C
    end if
    u ← min{u′ ∈ U : u′ > u}
  end while
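For a finite hypothesis class, constraint selection is easy to make concrete. The Python sketch below is ours (representing hypotheses as callables and constraints as tuples is an assumption, not the paper's):

def constraint_selection(hypotheses, constraints):
    # hypotheses: list of functions q(x, a, t); constraints: list of (x, a, t, L, U).
    q_c = set(range(len(hypotheses)))              # indices of surviving hypotheses
    for u in sorted({U for (_, _, _, _, U) in constraints}):  # small upper bounds first
        for i in reversed(range(len(constraints))):           # more recent first
            x, a, t, L, U = constraints[i]
            if U != u:
                continue
            keep = {k for k in q_c if L <= hypotheses[k](x, a, t) <= U}
            if keep:                               # skip constraints that would empty Q_C
                q_c = keep
    return [hypotheses[k] for k in q_c]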
OCP, presented below as Algorithm 2, at each time t computes for the current state x_{j,t} and each action a the greatest state-action value Q_t(x_{j,t}, a) among functions in Q_C and selects an action that attains the maximum. In other words, an action is chosen based on the most optimistic feasible outcome subject to constraints. The subsequent reward and state transition give rise to a new constraint that is used to update C. Note that the update of C is postponed until one episode is completed.
Algorithm 2 Optimistic Constraint Propagation
Require: S, A, H, Q
  Initialize C ← ∅
  for episode j = 0, 1, · · · do
    Set C′ ← C
    for period t = 0, 1, · · · , H − 1 do
      Apply a_{j,t} ∈ argmax_{a∈A} sup_{Q∈Q_C} Q_t(x_{j,t}, a)
      if t < H − 1 then
        U_{j,t} ← sup_{Q∈Q_C} (R_t(x_{j,t}, a_{j,t}) + sup_{a∈A} Q_{t+1}(x_{j,t+1}, a))
        L_{j,t} ← inf_{Q∈Q_C} (R_t(x_{j,t}, a_{j,t}) + sup_{a∈A} Q_{t+1}(x_{j,t+1}, a))
      else
        U_{j,t} ← R_t(x_{j,t}, a_{j,t})
        L_{j,t} ← R_t(x_{j,t}, a_{j,t})
      end if
      Append the constraint {Q ∈ Q : L_{j,t} ≤ Q_t(x_{j,t}, a_{j,t}) ≤ U_{j,t}} to C′
    end for
    Update C ← C′
  end for
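A hedged sketch (ours) of one OCP episode for a finite hypothesis class, reusing constraint_selection from above; env.step(x, a, t) → (reward, next_state) is an assumed deterministic-system interface:

def ocp_episode(env, x0, H, actions, hypotheses, constraints):
    q_c = constraint_selection(hypotheses, constraints)  # Q_C is fixed within an episode
    x, new = x0, []
    for t in range(H):
        a = max(actions, key=lambda a: max(q(x, a, t) for q in q_c))
        r, x_next = env.step(x, a, t)
        if t < H - 1:
            vals = [r + max(q(x_next, b, t + 1) for b in actions) for q in q_c]
            lo, hi = min(vals), max(vals)
        else:
            lo = hi = r
        new.append((x, a, t, lo, hi))
        x = x_next
    constraints.extend(new)          # C is updated only after the episode ends
    return constraints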
Note that if Q∗ ∈ Q then each constraint appended to C does not rule out Q∗, and therefore, the sequence of sets Q_C generated as the algorithm progresses is decreasing and contains Q∗ in its intersection. In the agnostic case, where Q∗ may not lie in Q, new constraints can be inconsistent with previous constraints, in which case selected previous constraints are relaxed as determined by Algorithm 1.
Let us briefly discuss several contexts of practical relevance and/or theoretical interest in which OCP
can be applied.
• Finite state/action tabula rasa case. With finite state and action spaces, Q∗ can be represented as a vector, and without special prior knowledge, it is natural to let Q = ℝ^{|S|×|A|×H}.
• Polytopic prior constraints. Consider the aforementioned example, but suppose that we have prior knowledge that Q∗ lies in a particular polytope. Then we can let Q be that polytope and again apply OCP.
• Linear systems with quadratic cost (LQ). In this classical control model, if S = ℝ^n, A = ℝ^m, and R is a positive semidefinite quadratic, then for each t, Q∗_t is known to be a positive semidefinite quadratic, and it is natural to let Q = Q_0^H with Q_0 denoting the set of positive semidefinite quadratics.
• Finite hypothesis class. Consider a context where we have prior knowledge that Q∗ can be well approximated by some element in a finite hypothesis class. Then we can let Q be that finite hypothesis class and apply OCP. This scenario is of particular interest from the perspective of learning theory. Note that this context entails agnostic learning, which is accommodated by OCP.
• Linear combination of features. It is often effective to hand-select a set of features φ_1, . . . , φ_K, each mapping S × A to ℝ, and then, for each t, aim to compute weights θ^{(t)} ∈ ℝ^K so that Σ_k θ_k^{(t)} φ_k approximates Q∗_t, without knowing for sure that Q∗_t lies in the span of the features. To apply OCP here, we would let Q = Q_0^H with Q_0 = span(φ_1, . . . , φ_K). Note that this context also entails agnostic learning.
• Sigmoid. If it is known that rewards are only received upon transitioning to the terminal state and take values between 0 and 1, it might be appropriate to use a variation of the aforementioned feature-based model that applies a sigmoidal function to the linear combination. In particular, we could have Q = Q_0^H with Q_0 = {ψ(Σ_k θ_k φ_k(·)) : θ ∈ ℝ^K}, where ψ(z) = e^z/(1 + e^z).
It is worth mentioning that OCP, as we have defined it, assumes that an action a maximizing sup_{Q∈Q_C} Q_t(x_{j,t}, a) exists in each iteration. It is not difficult to modify the algorithm so that it addresses cases where this is not true. But we have not presented the more general form of OCP in order to avoid complicating this short paper.
4 Sample Efficiency of Optimistic Constraint Propagation
We now establish results concerning the sample efficiency of OCP. Our results bound the time it
takes OCP to learn, and this must depend on the complexity of the hypothesis class. As such, we
begin by defining the eluder dimension, as introduced in [20], which is the notion of complexity we
will use.
4.1 Eluder Dimension
Let Z = {(x, a, t) : x ∈ S, a ∈ A, t = 0, . . . , H − 1} be the set of all state-action-period triples, and let Q denote a nonempty set of functions mapping Z to ℝ. For all (x, a, t) ∈ Z and Z̃ ⊆ Z, (x, a, t) is said to be dependent on Z̃ with respect to Q if any pair of functions Q, Q̃ ∈ Q that are equal on Z̃ are equal at (x, a, t). Further, (x, a, t) is said to be independent of Z̃ with respect to Q if (x, a, t) is not dependent on Z̃ with respect to Q.

The eluder dimension dimE[Q] of Q is the length of the longest sequence of elements in Z such that every element is independent of its predecessors. Note that dimE[Q] can be zero or infinity, and it is straightforward to show that if Q_1 ⊆ Q_2 then dimE[Q_1] ≤ dimE[Q_2]. Based on results of [20], we can characterize the eluder dimensions of various hypothesis classes presented in the previous section.
• Finite state/action tabula rasa case. If Q = ℝ^{|S|×|A|×H}, then dimE[Q] = |S| · |A| · H.
• Polytopic prior constraints. If Q is a polytope of dimension d in ℝ^{|S|×|A|×H}, then dimE[Q] = d.
• Linear systems with quadratic cost (LQ). If Q_0 is the set of positive semidefinite quadratics with domain ℝ^{m+n} and Q = Q_0^H, then dimE[Q] = (m + n + 1)(m + n)H/2.
• Finite hypothesis space. If |Q| < ∞, then dimE[Q] ≤ |Q| − 1.
• Linear combination of features. If Q = Q_0^H with Q_0 = span(φ_1, . . . , φ_K), then dimE[Q] ≤ KH.
• Sigmoid. If Q = Q_0^H with Q_0 = {ψ(Σ_k θ_k φ_k(·)) : θ ∈ ℝ^K}, then dimE[Q] ≤ KH.
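For intuition, the eluder dimension of a small finite class can be brute-forced. The sketch below is ours and exponential in |Z|, so it is only for toy examples; each hypothesis is a dict mapping z = (x, a, t) to a value:

from itertools import permutations

def dependent(z, prefix, hypotheses):
    # z depends on prefix if any two hypotheses agreeing on prefix agree at z.
    return all(q1[z] == q2[z]
               for q1 in hypotheses for q2 in hypotheses
               if all(q1[w] == q2[w] for w in prefix))

def eluder_dimension(Z, hypotheses):
    best = 0
    for seq in permutations(Z):
        length = 0
        for i, z in enumerate(seq):
            if dependent(z, seq[:i], hypotheses):
                break
            length = i + 1
        best = max(best, length)
    return best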
4.2 Learning with a Coherent Hypothesis Class
We now present results that apply when OCP is presented with a coherent hypothesis class; that is, where Q∗ ∈ Q. Our first result establishes that OCP can deliver less than optimal performance in no more than dimE[Q] episodes.

Theorem 1 For any system M = (S, A, H, F, R, S), if OCP is applied with Q∗ ∈ Q, then |{j : R^{(j)} < V∗_0(x_{j,0})}| ≤ dimE[Q].
This theorem follows from an "exploration-exploitation lemma", which asserts that in each episode, OCP either delivers optimal reward (exploitation) or introduces a constraint that reduces the eluder dimension of the hypothesis class by one (exploration). Consequently, OCP will experience suboptimal performance in at most dimE[Q] episodes. A complete proof is provided in the appendix.
An immediate corollary bounds regret.
Corollary 1 For any R̄, any system M = (S, A, H, F, R, S) with sup_{(x,a,t)} |R_t(x, a)| ≤ R̄, and any T, if OCP is applied with Q∗ ∈ Q, then Regret(T) ≤ 2R̄ H dimE[Q].
Note that the regret bound in Corollary 1 does not depend on time T; thus, it is an O(1) bound. Furthermore, this regret bound is linear in R̄, H and dimE[Q], and does not directly depend on |S| or |A|. The following results demonstrate that the bounds of the above theorem and corollary are sharp.
Theorem 2 For any reinforcement learning algorithm that takes as input a state space, an action space, a horizon, and a hypothesis class, there exists a system M = (S, A, H, F, R, S) and a hypothesis class Q ∋ Q∗ such that |{j : R^{(j)} < V∗_0(x_{j,0})}| ≥ dimE[Q].
Theorem 3 For any R̄ ≥ 0 and any reinforcement learning algorithm that takes as input a state space, an action space, a horizon, and a hypothesis class, there exists a system M = (S, A, H, F, R, S) with sup_{(x,a,t)} |R_t(x, a)| ≤ R̄ and a hypothesis class Q ∋ Q∗ such that sup_T Regret(T) ≥ 2R̄ H dimE[Q].
A constructive proof of these lower bounds is provided in the appendix. Following our discussion
in previous sections, we discuss several interesting contexts in which the agent knows a coherent
hypothesis class Q with finite eluder dimension.
• Finite state/action tabula rasa case. If we apply OCP in this case, then it will deliver suboptimal performance in at most |S| · |A| · H episodes. Furthermore, if sup_{(x,a,t)} |R_t(x, a)| ≤ R̄, then for any T, Regret(T) ≤ 2R̄ |S||A| H².
• Polytopic prior constraints. If we apply OCP in this case, then it will deliver sub-optimal performance in at most d episodes. Furthermore, if sup_{(x,a,t)} |R_t(x, a)| ≤ R̄, then for any T, Regret(T) ≤ 2R̄ H d.
• Linear systems with quadratic cost (LQ). If we apply OCP in this case, then it will deliver sub-optimal performance in at most (m + n + 1)(m + n)H/2 episodes.
• Finite hypothesis class case. Assume that the agent has prior knowledge that Q∗ ∈ Q, where Q is a finite hypothesis class. If we apply OCP in this case, then it will deliver suboptimal performance in at most |Q| − 1 episodes. Furthermore, if sup_{(x,a,t)} |R_t(x, a)| ≤ R̄, then for any T, Regret(T) ≤ 2R̄ H [|Q| − 1].
4.3 Agnostic Learning
As we have discussed in Section 3, OCP can also be applied in agnostic learning cases, where Q∗ may not lie in Q. For such cases, the performance of OCP should depend not only on the complexity of Q, but also on the distance between Q and Q∗. We now present results when OCP is applied in a special agnostic learning case, where Q is the span of pre-specified indicator functions over disjoint subsets. We henceforth refer to this case as the state aggregation case.
Specifically, we assume that for any t = 0, 1, · · · , H − 1, the state-action space at period t, Z_t = {(x, a, t) : x ∈ S, a ∈ A}, can be partitioned into K_t disjoint subsets Z_{t,1}, Z_{t,2}, · · · , Z_{t,K_t}, and use φ_{t,k} to denote the indicator function for partition Z_{t,k} (i.e. φ_{t,k}(x, a, t) = 1 if (x, a, t) ∈ Z_{t,k}, and φ_{t,k}(x, a, t) = 0 otherwise). We define K = Σ_{t=0}^{H−1} K_t, and Q as

    Q = span(φ_{0,1}, φ_{0,2}, · · · , φ_{0,K_0}, φ_{1,1}, · · · , φ_{H−1,K_{H−1}}).    (4.1)

Note that dimE[Q] = K. We define the distance between Q∗ and the hypothesis class Q as

    ρ = min_{Q∈Q} ‖Q − Q∗‖∞ = min_{Q∈Q} sup_{(x,a,t)} |Q_t(x, a) − Q∗_t(x, a)|.    (4.2)
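For this class, the best sup-norm approximation has a closed form: on each cell the optimal constant is the midrange of Q∗ over the cell. A sketch (ours) that evaluates ρ for a tabular problem, given a hypothetical partition map part(x, a, t) → k and a callable q_star:

import itertools

def aggregation_error(states, actions, H, part, q_star):
    rho = 0.0
    for t in range(H):
        cells = {}
        for x, a in itertools.product(states, actions):
            cells.setdefault(part(x, a, t), []).append(q_star(x, a, t))
        for vals in cells.values():
            rho = max(rho, (max(vals) - min(vals)) / 2.0)  # midrange is optimal
    return rho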
The following result establishes that with Q and ρ defined above, the performance loss of OCP is larger than 2ρH(H + 1) in at most K episodes.

Theorem 4 For any system M = (S, A, H, F, R, S), if OCP is applied with Q defined in Eqn. (4.1), then

    |{j : R^{(j)} < V∗_0(x_{j,0}) − 2ρH(H + 1)}| ≤ K,

where K is the number of partitions and ρ is defined in Eqn. (4.2).
Similar to Theorem 1, this theorem also follows from an "exploration-exploitation lemma", which asserts that in each episode, OCP either delivers near-optimal reward (exploitation), or approximately determines Q∗_t(x, a) for all the (x, a, t) in a disjoint subset (exploration). A complete proof for Theorem 4 is provided in the appendix. An immediate corollary bounds regret.

Corollary 2 For any R̄ ≥ 0, any system M = (S, A, H, F, R, S) with sup_{(x,a,t)} |R_t(x, a)| ≤ R̄, and any time T, if OCP is applied with Q defined in Eqn. (4.1), then Regret(T) ≤ 2R̄ K H + 2ρ(H + 1)T, where K is the number of partitions and ρ is defined in Eqn. (4.2).
Note that the regret bound in Corollary 2 is O(T), and the coefficient of the linear term is 2ρ(H + 1). Consequently, if Q∗ is close to Q, then the regret will increase slowly with T. Furthermore, the regret bound in Corollary 2 does not directly depend on |S| or |A|. We further notice that the threshold performance loss in Theorem 4 is O(ρH²). The following proposition provides a condition under which the performance loss in one episode is O(ρH).
Proposition 1 For any episode j, if for all t = 0, 1, · · · , H − 1,

    Q_C ⊆ {Q ∈ Q : L_{j,t} ≤ Q_t(x_{j,t}, a_{j,t}) ≤ U_{j,t}},

then we have V∗_0(x_{j,0}) − R^{(j)} ≤ 6ρH = O(ρH).

That is, if all the new constraints in an episode are redundant, then the performance loss in that episode is O(ρH). Note that if the condition for Proposition 1 holds in an episode, then Q_C will not be modified at the end of that episode. Furthermore, if the system has a fixed initial state and the condition for Proposition 1 holds in one episode, then it will hold in all the subsequent episodes, and consequently, the performance losses in all the subsequent episodes are O(ρH).
5 Computational Efficiency of Optimistic Constraint Propagation
We now briefly discuss the computational complexity of OCP. As typical in the complexity analysis
of optimization algorithms, we assume that basic operations include the arithmetic operations, comparisons, and assignment, and measure computational complexity in terms of the number of basic
operations (henceforth referred to as operations) per period.
First, it is worth pointing out that for a general hypothesis class Q and general action space A, the per-period computations of OCP can be intractable. This is because:
• Computing sup_{Q∈Q_C} Q_t(x_{j,t}, a), U_{j,t} and L_{j,t} requires solving possibly intractable optimization problems.
• Selecting an action that maximizes sup_{Q∈Q_C} Q_t(x_{j,t}, a) can be intractable.
Further, the number of constraints in C, and with it the number of operations per period, can grow
over time.
However, if |A| is tractably small and Q has some special structure (e.g. Q is a finite set or a linear subspace or, more generally, a polytope), then by discarding some "redundant" constraints in C, OCP with a variant of Algorithm 1 will be computationally efficient, and the sample efficiency results developed in Section 4 will still hold. Due to space limitations, we only discuss the scenario where Q is a polytope of dimension d. Note that the finite state/action tabula rasa case, the linear-quadratic case, and the case with linear combinations of disjoint indicator functions are all special cases of this scenario.

Specifically, if Q is a polytope of dimension d (i.e., within a d-dimensional subspace), then any Q ∈ Q can be represented by a weight vector θ ∈ ℝ^d, and Q can be characterized by a set of linear inequalities in θ. Furthermore, the new constraints of the form L_{j,t} ≤ Q_t(x_{j,t}, a_{j,t}) ≤ U_{j,t} are also linear inequalities in θ. Hence, in each episode, Q_C is characterized by a polyhedron in ℝ^d, and sup_{Q∈Q_C} Q_t(x_{j,t}, a), U_{j,t} and L_{j,t} can be computed by solving linear programming (LP) problems. If we assume that all the encountered numerical values can be represented with B bits, and LPs are solved by Karmarkar's algorithm [11], then the following proposition bounds the computational complexity.
Proposition 2 If Q is a polytope of dimension d, each numerical value in the problem data or observed in the course of learning can be represented with B bits, and OCP uses Karmarkar's algorithm to solve linear programs, then the computational complexity of OCP is O([|A| + |C|] |C| d^{4.5} B) operations per period.

The proof of Proposition 2 is provided in the appendix. Notice that the computational complexity is polynomial in d, B, |C| and |A|, and thus, OCP will be computationally efficient if all these parameters are tractably small. Note that the bound in Proposition 2 is a worst-case bound, and the O(d^{4.5}) term is incurred by the need to solve LPs. For some special cases, the computational complexity is much less. For instance, in the state aggregation case, the computational complexity is O(|C| + |A| + d) operations per period.
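In the polytopic case, each optimistic value is a single LP. A sketch (ours) with SciPy, where Q_t(x, a) = φ(x, a, t)·θ and Q_C = {θ : A_ub θ ≤ b_ub}:

import numpy as np
from scipy.optimize import linprog

def optimistic_value(phi_xat, A_ub, b_ub):
    # sup over theta in the polyhedron of phi . theta; linprog minimizes, so negate.
    res = linprog(c=-np.asarray(phi_xat), A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
    assert res.success, "Q_C empty or unbounded in this direction"
    return -res.fun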
As we have discussed above, one can ensure that |C| remains bounded by using variants of Algorithm 1 that discard the redundant constraints and/or update Q_C more efficiently. Specifically, it is
straightforward to design such constraint selection algorithms if Q is a coherent hypothesis class, or
if Q is the span of pre-specified indicator functions over disjoint sets. Furthermore, if the notion of
redundant constraints is properly defined, the sample efficiency results derived in Section 4 will still
hold.
6 Conclusion
We have proposed a novel reinforcement learning algorithm, called optimistic constraint propagation
(OCP), that synthesizes efficient exploration and value function generalization for reinforcement
learning in deterministic systems. We have shown that OCP is sample efficient if Q∗ lies in the given
hypothesis class, or if the given hypothesis class is the span of pre-specified indicator functions over
disjoint sets.
It is worth pointing out that for more general reinforcement learning problems, how to design provably sample efficient algorithms with value function generalization is currently still open. For instance, it is not clear how to establish such algorithms for the general agnostic learning case discussed in this paper, as well as for reinforcement learning in MDPs. One interesting direction for
future research is to extend OCP, or a variant of it, to these two problems.
References
[1] Yasin Abbasi-Yadkori and Csaba Szepesvári. Regret bounds for the adaptive control of linear quadratic systems. Journal of Machine Learning Research - Proceedings Track, 19:1–26, 2011.
[2] Peter Auer and Ronald Ortner. Logarithmic online regret bounds for undiscounted reinforcement learning. In NIPS, pages 49–56, 2006.
[3] Mohammad Gheshlaghi Azar, Alessandro Lazaric, and Emma Brunskill. Regret bounds for reinforcement learning with policy advice. CoRR, abs/1305.1027, 2013.
[4] Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009), pages 35–42, June 2009.
[5] Dimitri P. Bertsekas and John Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, September 1996.
[6] Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.
[7] Geoffrey Gordon. Online fitted reinforcement learning. In Advances in Neural Information Processing Systems 8, pages 1052–1058. MIT Press, 1995.
[8] Morteza Ibrahimi, Adel Javanmard, and Benjamin Van Roy. Efficient reinforcement learning for high dimensional linear quadratic systems. In NIPS, 2012.
[9] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[10] Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[11] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373–396, 1984.
[12] Michael J. Kearns and Daphne Koller. Efficient reinforcement learning in factored MDPs. In IJCAI, pages 740–747, 1999.
[13] Michael J. Kearns and Satinder P. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
[14] Tor Lattimore, Marcus Hutter, and Peter Sunehag. The sample-complexity of general reinforcement learning. In ICML, 2013.
[15] Lihong Li and Michael Littman. Reducing reinforcement learning to KWIK online regression. Annals of Mathematics and Artificial Intelligence, 2010.
[16] Lihong Li, Michael L. Littman, and Thomas J. Walsh. Knows what it knows: a framework for self-aware learning. In ICML, pages 568–575, 2008.
[17] Ronald Ortner and Daniil Ryabko. Online regret bounds for undiscounted continuous reinforcement learning. In NIPS, 2012.
[18] Warren Powell and Ilya Ryzhov. Optimal Learning. John Wiley and Sons, 2011.
[19] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical report, 1994.
[20] Daniel Russo and Benjamin Van Roy. Learning to optimize via posterior sampling. CoRR, abs/1301.2609, 2013.
[21] Satinder P. Singh, Tommi Jaakkola, and Michael I. Jordan. Reinforcement learning with soft state aggregation. In NIPS, pages 361–368, 1994.
[22] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 881–888, 2006.
[23] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, March 1998.
[24] Csaba Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[25] John N. Tsitsiklis and Benjamin Van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 22(1-3):59–94, 1996.
[26] Benjamin Van Roy. Performance loss bounds for approximate value iteration with state aggregation. Math. Oper. Res., 31(2):234–244, 2006.
lihong:3 entail:3 v0:7 add:1 kwik:4 posterior:1 recent:3 perspective:2 inf:1 discard:1 scenario:3 inequality:2 arbitrarily:1 vt:4 postponed:1 morgan:1 tabula:5 contingent:1 relaxed:1 performer:1 period:15 redundant:4 u0:4 arithmetic:1 multiple:1 desirable:1 sham:1 reduces:2 technical:2 characterized:2 concerning:1 niranjan:1 variant:3 regression:3 q2q:2 basic:2 neuro:1 iteration:2 kernel:2 represent:2 c1:2 szepesv:2 interval:1 else:1 grow:3 publisher:1 sure:1 subject:1 inconsistent:1 jordan:1 near:7 backwards:1 xj:25 fit:2 restrict:2 identified:1 suboptimal:3 knowing:1 whether:1 expression:1 optimism:1 bartlett:1 adel:1 peter:4 action:33 repeatedly:2 useful:2 generally:1 clear:2 tewari:1 ph:2 tth:1 notice:2 disjoint:8 per:5 track:1 lazaric:1 discrete:1 threshold:2 enormous:2 parameterized:1 uncertainty:2 throughout:1 reasonable:1 ruling:1 decision:4 ble:1 appendix:4 bit:2 bound:32 quadratic:12 encountered:1 constraint:33 infinity:1 generates:1 u1:1 span:9 min:4 combination:5 march:1 terminates:1 smaller:1 son:1 partitioned:1 lp:3 kakade:1 computationally:5 remains:2 discus:5 count:1 nonempty:1 know:3 end:8 operation:7 apply:11 appropriate:1 yadkori:1 thomas:2 denotes:2 assumes:1 include:1 ensure:1 completed:1 restrictive:1 k1:1 uj:7 establish:8 classical:1 realized:1 moshe:1 dependence:2 rt:14 interacts:2 said:3 exhibit:1 gradient:1 september:1 subspace:2 distance:2 athena:1 gracefully:2 bvr:1 polytope:7 marcus:1 length:1 index:1 difficult:1 potentially:1 relate:2 rise:1 constructively:1 design:4 zt:6 boltzmann:1 policy:8 upper:6 observation:2 finite:19 immediate:2 defining:1 extended:1 sharp:1 regal:1 introduced:1 pair:2 specified:8 conflict:2 coherent:4 quadratically:1 tractably:2 nip:4 address:2 gheshlaghi:1 tennenholtz:1 below:1 sparsity:1 challenge:1 summarize:1 program:1 ambuj:1 including:1 max:1 greatest:1 critical:1 event:1 difficulty:1 treated:1 natural:2 indicator:7 scheme:2 older:2 rummery:1 mdps:4 started:1 prior:11 literature:5 evolve:1 asymptotic:2 loss:9 lecture:1 highlight:1 interesting:8 limitation:1 geoffrey:1 triple:1 incurred:1 agent:5 propagates:1 principle:1 strehl:1 course:1 brafman:1 free:1 jth:3 tsitsiklis:2 sunehag:1 jh:1 warren:1 face:1 sparse:1 van:5 feedback:1 dimension:14 complicating:1 transition:6 computes:1 author:2 collection:1 reinforcement:40 made:3 commonly:1 adaptive:1 approximate:1 compact:2 preferred:2 satinder:2 continuous:3 quantifies:1 promising:1 learn:5 synthesizes:1 hc:2 necessarily:1 domain:1 linearly:3 rh:2 azar:1 body:2 advice:1 referred:1 wiley:1 sub:2 position:1 brunskill:1 exponential:1 lq:3 lie:9 candidate:1 theorem:10 transitioning:1 xt:1 discarding:1 pac:1 er:1 intractable:5 exists:3 corr:2 phd:1 horizon:11 morteza:1 supa2a:2 intersection:1 logarithmic:1 ez:2 scalar:1 applies:2 determines:1 goal:1 consequently:3 feasible:1 specifically:4 determined:1 typical:1 reducing:1 lemma:2 kearns:2 total:1 called:1 exception:1 select:2 college:1 combinatorica:1 relevance:2 constructive:1 karmarkar:3 |
4,389 | 4,973 | Aggregating Optimistic Planning Trees for Solving
Markov Decision Processes
Gunnar Kedenburg
INRIA Lille - Nord Europe / idalab GmbH
[email protected]
Raphaël Fonteneau
University of Liège / INRIA Lille - Nord Europe
[email protected]
Rémi Munos
INRIA Lille - Nord Europe / Microsoft Research New England
[email protected]
Abstract
This paper addresses the problem of online planning in Markov decision processes
using a randomized simulator, under a budget constraint. We propose a new
algorithm which is based on the construction of a forest of planning trees, where
each tree corresponds to a random realization of the stochastic environment. The
trees are constructed using a ?safe? optimistic planning strategy combining the
optimistic principle (in order to explore the most promising part of the search
space first) with a safety principle (which guarantees a certain amount of uniform
exploration). In the decision-making step of the algorithm, the individual trees are
aggregated and an immediate action is recommended. We provide a finite-sample
analysis and discuss the trade-off between the principles of optimism and safety.
We also report numerical results on a benchmark problem. Our algorithm performs
as well as state-of-the-art optimistic planning algorithms, and better than a related
algorithm which additionally assumes the knowledge of all transition distributions.
1 Introduction
Adaptive decision making algorithms have been used increasingly in the past years, and have attracted
researchers from many application areas, like artificial intelligence [16], financial engineering [10],
medicine [14] and robotics [15]. These algorithms realize an adaptive control strategy through
interaction with their environment, so as to maximize an a priori performance criterion.
A new generation of algorithms based on look-ahead tree search techniques has brought a breakthrough in practical performance on planning problems with large state spaces. Techniques based on
planning trees such as Monte Carlo tree search [4, 13], and in particular the UCT algorithm (UCB
applied to Trees, see [12]) have made it possible to tackle large-scale problems such as the game of Go [7].
These methods exploit that in order to decide on an action at a given state, it is not necessary to build
an estimate of the value function everywhere. Instead, they search locally in the space of policies,
around the current state.
We propose a new algorithm for planning in Markov Decision Problems (MDPs). We assume that
a limited budget of calls to a randomized simulator for the MDP (the generative model in [11]) is
available for exploring the consequences of actions before making a decision. The intuition behind
our algorithm is to achieve a high exploration depth in the look-ahead trees by planning in fixed
realizations of the MDP, and to achieve the necessary exploration width by aggregating a forest of
planning trees (forming an approximation of the MDP from many realizations). Each of the trees
is developed around the state for which a decision has to be made, according to the principle of
optimism in the face of uncertainty [13] combined with a safety principle.
We provide a finite-sample analysis depending on the budget, split into the number of trees and
the number of node expansions in each tree. We show that our algorithm is consistent and that it
identifies the optimal action when given a sufficiently large budget. We also give numerical results
which demonstrate good performance on a benchmark problem. In particular, we show that our
algorithm achieves much better performance on this problem than OP-MDP [2] when both algorithms
generate the same number of successor states, despite the fact that OP-MDP assumes knowledge
of all successor state probabilities in the MDP, whereas our algorithm only samples states from a
simulator.
The paper is organized as follows: first, we discuss some related work in section 2. In section 3, the
problem addressed in this paper is formalized, before we describe our algorithm in section 4. Its
finite-sample analysis is given in section 5. We provide numerical results on the inverted pendulum
benchmark in section 6. In section 7, we discuss and conclude this work.
2 Related work
The optimism in the face of uncertainty paradigm has already led to several successful results
for solving decision making problems. Specifically, it has been applied in the following contexts:
multi-armed bandit problems [1] (which can be seen as single state MDPs), planning algorithms
for deterministic systems and stochastic systems [8, 9, 17], and global optimization of stochastic
functions that are only accessible through sampling. See [13] for a detailed review of the optimistic
principle applied to planning and optimization.
The algorithm presented in this paper is particularly closely related to two recently developed online
planning algorithms for solving MDPs, namely the OPD algorithm [9] for MDPs with deterministic
transitions, and the OP-MDP algorithm [2] which addresses stochastic MDPs where all transition
probabilities are known. A Bayesian adaptation of OP-MDP has also been proposed [6] for planning
in the context where the MDP is unknown.
Our contribution is also related to [5], where random ensembles of state-action independent disturbance scenarios are built, the planning problem is solved for each scenario, and a decision is made
based on majority voting. Finally, since our algorithm proceeds by sequentially applying the first
decision of a longer plan over a receding horizon, it can also be seen as a Model Predictive Control
[3] technique.
3 Formalization
Let (S, A, p, r, γ) be a Markov decision process (MDP), where the sets S and A respectively denote the state space and the finite action space, with |A| > 1, of the MDP. When an action a ∈ A is selected in state s ∈ S of the MDP, it transitions to a successor state s′ ∈ S(s, a) with probability p(s′|s, a). We further assume that every successor state set S(s, a) is finite and that their cardinality is bounded by K ∈ ℕ. Associated with the transition is a deterministic instantaneous reward r(s, a, s′) ∈ [0, 1].

While the transition probabilities may be unknown, it is assumed that a randomized simulator is available, which, given a state-action pair (s, a), outputs a successor state s′ ∼ p(·|s, a). The ability to sample is a weaker assumption than the knowledge of all transition probabilities. In this paper we consider the problem of planning under a budget constraint: only a limited number of samples may be drawn using the simulator. Afterwards, a single decision has to be made.
Let π : S → A denote a deterministic policy. Define the value function of the policy π in a state s as the discounted sum of expected rewards:

    v^π : S → ℝ,  v^π : s ↦ E[ Σ_{t=0}^∞ γ^t r(s_t, π(s_t), s_{t+1}) | s_0 = s ],    (1)

where the constant γ ∈ (0, 1) is called the discount factor. Let π∗ be an optimal policy (i.e. a policy that maximizes v^π in all states). It is well known that the optimal value function v∗ := v^{π∗} is the solution to the Bellman equation
    ∀s ∈ S :  v∗(s) = max_{a∈A} Σ_{s′∈S(s,a)} p(s′|s, a) (r(s, a, s′) + γ v∗(s′)).

Given the action-value function Q∗ : (s, a) ↦ Σ_{s′∈S(s,a)} p(s′|s, a) (r(s, a, s′) + γ v∗(s′)), an optimal policy can be derived as π∗ : s ↦ argmax_{a∈A} Q∗(s, a).
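To fix ideas, here is a hedged sketch (our own interface, not the authors') of the generative model assumed above, together with one exact Bellman backup for the case where the transition probabilities happen to be known:

import random

def simulate(s, a, transitions, rewards):
    # Randomized simulator: sample s' ~ p(.|s, a) and return (reward, s').
    succ, probs = transitions[(s, a)]
    s_next = random.choices(succ, weights=probs)[0]
    return rewards[(s, a, s_next)], s_next

def bellman_backup(s, actions, transitions, rewards, v, gamma):
    # v(s) = max_a sum_{s'} p(s'|s,a) (r(s,a,s') + gamma v(s')); needs known p,
    # which ASOP itself does not require.
    def q(a):
        succ, probs = transitions[(s, a)]
        return sum(p * (rewards[(s, a, sp)] + gamma * v[sp])
                   for sp, p in zip(succ, probs))
    return max(q(a) for a in actions)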
4 Algorithm
We name our algorithm ASOP (for "Aggregated Safe Optimistic Planning"). The main idea behind it is to use a simulator to obtain a series of deterministic "realizations" of the stochastic MDP, to plan in each of them individually, and to then aggregate all the information gathered in the deterministic MDPs into an empirical approximation to the original MDP, on the basis of which a decision is made.

We refer to the planning trees used here as single successor state trees (S3-trees), in order to distinguish them from other planning trees used for the same problem (e.g. the OP-MDP tree, where all possible successor states are considered). Every node of an S3-tree represents a state s ∈ S, and has at most one child node per state-action a, representing a successor state s′ ∈ S. The successor state is drawn using the simulator during the construction of the S3-tree.
4.1 Safe optimistic planning in S3-trees: the SOP algorithm
SOP is an algorithm for sequentially constructing an S3-tree. It can be seen as a variant of the OPD algorithm [9] for planning in deterministic MDPs. SOP expands up to two leaves of the planning tree per iteration. The first leaf (the optimistic one) is a maximizer of an upper bound (called the b-value) on the value function of the (deterministic) realization of the MDP explored in the S3-tree. The b-value of a node x is defined as

    b(x) := Σ_{i=0}^{d(x)−1} γ^i r_i + γ^{d(x)} / (1 − γ),    (2)

where (r_i) is the sequence of rewards obtained along the path to x, and d(x) is the depth of the node (the length of the path from the root to x). Only expanding the optimistic leaf would not be enough to make ASOP consistent; this is shown in the appendix. Therefore, a second leaf (the safe one), defined as the shallowest leaf in the current tree, is also expanded in each iteration. A pseudo-code is given as algorithm 1.
Algorithm 1: SOP
Data: The initial state s_0 ∈ S and a budget n ∈ ℕ
Result: A planning tree T
Let T denote a tree consisting only of a leaf, representing s_0.
Initialize the cost counter c := 0.
while c < n do
  Form a subset of leaves of T, L, containing a leaf of minimal depth, and a leaf of maximal b-value (computed according to (2); the two leaves can be identical).
  foreach l ∈ L do
    Let s denote the state represented by l.
    foreach a ∈ A do
      if c < n then
        Use the simulator to draw a successor state s′ ∼ p(·|s, a).
        Create an edge in T from l to a new leaf representing s′.
        Let c := c + 1.
return T
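A compact Python sketch of SOP (ours; the leaf representation and the simulate(s, a) → (reward, next_state) interface are assumptions, and a leaf whose expansion is cut short by the budget is simply dropped) makes the double expansion concrete:

from itertools import count

def b_value(rewards, gamma):
    d = len(rewards)
    return sum(gamma ** i * r for i, r in enumerate(rewards)) + gamma ** d / (1 - gamma)

def sop(s0, budget, actions, simulate, gamma):
    uid = count()                                   # unique ids keep leaves distinct
    leaves, tree, c = [(next(uid), (), s0)], [], 0  # leaf = (uid, path rewards, state)
    while c < budget:
        safe = min(leaves, key=lambda l: len(l[1]))                    # shallowest
        optimistic = max(leaves, key=lambda l: b_value(l[1], gamma))   # highest b-value
        for leaf in dict.fromkeys([safe, optimistic]):                 # may coincide
            if c >= budget:
                break
            leaves.remove(leaf)
            _, rewards, s = leaf
            for a in actions:
                if c >= budget:
                    break
                r, s_next = simulate(s, a)
                child = (next(uid), rewards + (r,), s_next)
                tree.append((leaf, a, child))                          # record the edge
                leaves.append(child)
                c += 1
    return tree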
4.2 Aggregation of S3-trees: the ASOP algorithm
ASOP consists of three steps. In the first step, it runs independent instances of SOP to collect information about the MDP, in the form of a forest of S3-trees. It then computes action-values Q̂∗(s_0, a) of a single "empirical" MDP based on the collected information, in which states are represented by forests: on a transition, the forest is partitioned into groups by successor states, and the corresponding frequencies are taken as the transition probabilities. Leaves are interpreted as absorbing states with zero reward on every action, yielding a trivial lower bound. A pseudo-code for this computation is given as algorithm 2. ASOP then outputs the action

    π̂(s_0) ∈ argmax_{a∈A} Q̂∗(s_0, a).

The optimal policy of the empirical MDP has the property that the empirical lower bound of its value, computed from the information collected by planning in the individual realizations, is maximal over the set of all policies. We give a pseudo-code for the ASOP algorithm as algorithm 3.
Algorithm 2: ActionValue
Data: A forest F and an action a, with each tree in F representing the same state s
Result: An empirical lower bound for the value of a in s
Let E denote the edges representing action a at any of the root nodes of F.
if E = ∅ then
  return 0
else
  Let F̄ be the set of trees pointed to by the edges in E.
  Enumerate the states represented by any tree in F̄ by {s′_i : i ∈ I} for some finite I.
  foreach i ∈ I do
    Denote the set of trees in F̄ which represent s′_i by F̄_i.
    Let ν̂_i := max_{a′∈A} ActionValue(F̄_i, a′).
    Let p̂_i := |F̄_i| / |F̄|.
  return Σ_{i∈I} p̂_i (r(s, a, s′_i) + γ ν̂_i)
Algorithm 3: ASOP
Data: The initial state s_0, a per-tree budget b ∈ ℕ and the forest size m ∈ ℕ
Result: An action to take
for i = 1, . . . , m do
  Let T_i := SOP(s_0, b).
return argmax_{a∈A} ActionValue({T_1, . . . , T_m}, a)
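Continuing the sketch above (again with our own data layout: a "tree" is the edge list returned by sop, and a forest of roots is a list of (edge_list, node) pairs that represent the same state), aggregation becomes a recursion over forests:

def action_value(roots, a, actions, gamma):
    children = [(child, edges) for edges, node in roots
                for parent, act, child in edges
                if parent == node and act == a]
    if not children:
        return 0.0                                   # leaf: absorbing, zero reward
    groups = {}                                      # partition by successor state
    for child, edges in children:
        groups.setdefault(child[2], []).append((child, edges))
    value = 0.0
    for group in groups.values():
        reward = group[0][0][1][-1]                  # reward of the sampled transition
        sub = [(edges, child) for child, edges in group]
        v = max(action_value(sub, b, actions, gamma) for b in actions)
        value += (len(group) / len(children)) * (reward + gamma * v)
    return value

def asop(s0, per_tree_budget, m, actions, simulate, gamma):
    forests = [sop(s0, per_tree_budget, actions, simulate, gamma) for _ in range(m)]
    roots = [(tree, tree[0][0]) for tree in forests if tree]
    return max(actions, key=lambda a: action_value(roots, a, actions, gamma))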
5 Finite-sample analysis
In this section, we provide a finite-sample analysis of ASOP in terms of the number of planning trees
m and per-tree budget n. An immediate consequence of this analysis is that ASOP is consistent: the
action returned by ASOP converges to the optimal action when both n and m tend to infinity.
Our loss measure is the "simple" regret, corresponding to the expected value of first playing the action π̂(s_0) returned by the algorithm at the initial state s_0 and acting optimally from then on, compared to acting optimally from the beginning:
R_{n,m}(s_0) = Q*(s_0, π*(s_0)) − Q*(s_0, π̂(s_0)).
First, let us use the "safe" part of SOP to show that each S3-tree is fully explored up to a certain depth d when given a sufficiently large per-tree budget n.
Lemma 1. For any d ∈ ℕ, once a budget of n ≥ 2|A| · (|A|^{d+1} − 1)/(|A| − 1) has been spent by SOP on an S3-tree, the state-actions of all nodes up to and including those at depth d have all been sampled exactly once.
Proof. A complete |A|-ary tree contains |A|^l nodes in level l, so it contains Σ_{l=0}^{d} |A|^l = (|A|^{d+1} − 1)/(|A| − 1) nodes up to and including level d. In each of these nodes, |A| actions need to be explored. We complete the proof by noticing that SOP spends at least half of its budget on shallowest leaves.
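As a quick sanity check of the budget formula in Lemma 1 (reconstructed here from the garbled original as n ≥ 2|A|(|A|^{d+1} − 1)/(|A| − 1)), the minimal budget can be computed directly:

```python
def min_budget(A, d):
    # (|A|^{d+1} - 1)/(|A| - 1) nodes up to depth d, |A| samples per node,
    # and a factor 2 since only half of the budget goes to the shallowest leaves.
    nodes = (A**(d + 1) - 1) // (A - 1)
    return 2 * A * nodes

print(min_budget(3, 2))   # 2 * 3 * 13 = 78 simulator calls for |A| = 3, depth 2
```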
Let v_ω^π and v_{ω,n}^π denote the value functions for a policy π in the infinite, completely explored S3-tree defined by a random realization ω and the finite S3-tree constructed by SOP for a budget of n in the same realization ω, respectively. From Lemma 1 we deduce that if the per-tree budget is at least
n ≥ 2 · |A|/(|A| − 1) · [ε(1 − γ)]^{−log|A| / log(1/γ)},   (3)
we obtain |v_ω^π(s_0) − v_{ω,n}^π(s_0)| ≤ Σ_{i=d+1}^{∞} γ^i r_i ≤ γ^{d+1}/(1 − γ) ≤ ε for any policy π.
ASOP aggregates the trees and computes the optimal policy π̂ of the resulting empirical MDP whose transition probabilities are defined by the frequencies (over the m S3-trees) of transitions from state-action to successor states. Therefore, π̂ is actually a policy maximizing the function
π ↦ (1/m) Σ_{i=1}^{m} v_{ω_i,n}^π(s_0).   (4)
If the number m of S3-trees and the per-tree budget n are large, we therefore expect the optimal policy π̂ of the empirical MDP to be close to the optimal policy π* of the true MDP. This is the result stated in the following theorem.
Theorem 1. For any δ ∈ (0, 1) and ε ∈ (0, 1), if the number of S3-trees is at least
m ≥ 8/(ε²(1 − γ)²) · [ K/(K − 1) · ((1 − γ)ε/4)^{−log K / log(1/γ)} · log|A| + log(4/δ) ]   (5)
and the per-tree budget is at least
n ≥ 2 · |A|/(|A| − 1) · [(1 − γ)ε/4]^{−log|A| / log(1/γ)},   (6)
then P(R_{n,m}(s_0) < ε) ≥ 1 − δ.
Proof. Let δ ∈ (0, 1) and ε ∈ (0, 1), and fix realizations {ω_1, . . . , ω_m} of the stochastic MDP, for some m satisfying (5). Each realization ω_i corresponds to an infinite, completely explored S3-tree. Let n denote some per-tree budget satisfying (6).
Analogously to (3), we know from Lemma 1 that, given our choice of n, SOP constructs trees which are completely explored up to depth d := ⌊log((1 − γ)ε/4) / log γ⌋, fulfilling γ^{d+1}/(1 − γ) ≤ ε/4.
Consider the following truncated value functions: let ν_d^π(s_0) denote the sum of expected discounted rewards obtained in the original MDP when following policy π for d steps and then receiving reward zero from there on, and let ν_{ω_i,d}^π(s_0) denote the analogous quantity in the MDP corresponding to realization ω_i.
Define, for all policies π, the quantities v̄_{m,n}^π := (1/m) Σ_{i=1}^{m} v_{ω_i,n}^π(s_0) and ν̄_{m,d}^π := (1/m) Σ_{i=1}^{m} ν_{ω_i,d}^π(s_0).
Since the trees are complete up to level d and the rewards are non-negative, we deduce that we have 0 ≤ v_{ω_i,n}^π − ν_{ω_i,d}^π ≤ ε/4 for each i and each policy π, thus the same will be true for the averages:
0 ≤ v̄_{m,n}^π − ν̄_{m,d}^π ≤ ε/4   ∀π.   (7)
Notice that ν_d^π(s_0) = E_ω[ν_{ω,d}^π(s_0)]. From the Chernoff–Hoeffding inequality, we have that for any fixed policy π (since the truncated values lie in [0, 1/(1 − γ)]),
P( |ν̄_{m,d}^π − ν_d^π(s_0)| ≥ ε/4 ) ≤ 2 e^{−m ε² (1−γ)² / 8}.
Now we need a uniform bound over the set of all possible policies. The number of distinct policies is |A| · |A|^K · · · · · |A|^{K^d} (at each level l, there are at most K^l states that can be reached by following a policy at previous levels, so there are |A|^{K^l} different choices that policies can make at level l). Thus, since m ≥ 8/(ε²(1 − γ)²) · [ K^{d+1}/(K − 1) · log|A| + log(4/δ) ], we have
P( max_π |ν̄_{m,d}^π − ν_d^π(s_0)| ≥ ε/4 ) ≤ δ/2.   (8)
The action returned by ASOP is π̂(s_0), where π̂ := argmax_π v̄_{m,n}^π.
Finally, it follows that with probability at least 1 − δ:
R_{n,m}(s_0) = Q*(s_0, π*(s_0)) − Q*(s_0, π̂(s_0)) ≤ v^{π*}(s_0) − v^{π̂}(s_0)
= [v^{π*}(s_0) − ν_d^{π*}(s_0)] + [ν_d^{π*}(s_0) − ν̄_{m,d}^{π*}] + [ν̄_{m,d}^{π*} − v̄_{m,n}^{π*}] + [v̄_{m,n}^{π*} − v̄_{m,n}^{π̂}] + [v̄_{m,n}^{π̂} − ν̄_{m,d}^{π̂}] + [ν̄_{m,d}^{π̂} − ν_d^{π̂}(s_0)] + [ν_d^{π̂}(s_0) − v^{π̂}(s_0)]
≤ ε/4 + ε/4 + 0 + 0 + ε/4 + ε/4 + 0 = ε,
where the first term is at most ε/4 by truncation and the last is at most 0 for the same reason; the second and sixth terms are each at most ε/4 by (8); the third and fifth are at most 0 and ε/4, respectively, by (7); and the fourth is at most 0 by the definition of π̂. □
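As a sanity check of the bounds (5) and (6) as reconstructed above, the following snippet evaluates them numerically; the bounds are worst-case and extremely conservative, so the numbers are only meant to verify the formulas, not to serve as practical sample sizes.

```python
import math

def sample_sizes(eps, delta, gamma, K, A):
    base = (1.0 - gamma) * eps / 4.0
    m = (8.0 / (eps**2 * (1.0 - gamma)**2)) * (
        K / (K - 1.0) * base**(-math.log(K) / math.log(1.0 / gamma)) * math.log(A)
        + math.log(4.0 / delta))
    n = 2.0 * A / (A - 1.0) * base**(-math.log(A) / math.log(1.0 / gamma))
    return math.ceil(m), math.ceil(n)

print(sample_sizes(eps=0.5, delta=0.1, gamma=0.8, K=2, A=3))
```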
Remark 1. The total budget (nm) required to return an ε-optimal action with high probability is thus of order ε^{−2 − log(K|A|)/log(1/γ)}. Notice that this rate is poorer (by an ε^{−2} factor) than the rate obtained for uniform planning in [2]; this is a direct consequence of the fact that we are only drawing samples, whereas a full model of the transition probabilities is assumed in [2].
Remark 2. Since there is a finite number of actions, by denoting Δ > 0 the optimality gap between the best and the second-best optimal action values, we have that the optimal arm is identified with high probability (i.e. the simple regret is 0) after a total budget of order Δ^{−2 − log(K|A|)/log(1/γ)}.
Remark 3. The optimistic part of the algorithm allows a deep exploration of the MDP. At the same time, it biases the expression maximized by π̂ in (4) towards near-optimal actions of the deterministic realizations. Under the assumptions of theorem 1, the bias becomes insignificant.
Remark 4. Notice that we do not use the optimistic properties of the algorithm in the analysis. The analysis only uses the "safe" part of the SOP planning, i.e. the fact that one sample out of two is devoted to expanding the shallowest nodes. An analysis of the benefit of the optimistic part of the algorithm, similar to the analyses carried out in [9, 2], would be much more involved and is deferred to a future work. However, the impact of the optimistic part of the algorithm is essential in practice, as shown in the numerical results.
6 Numerical results
In this section, we compare the performance of ASOP to OP-MDP [2], UCT [12], and FSSS [17]. We
use the (noisy) inverted pendulum benchmark problem from [2], which consists of swinging up and
stabilizing a weight attached to an actuated link that rotates in a vertical plane. Since the available
power is too low to push the pendulum up in a single rotation from the initial state, the pendulum has
to be swung back and forth to gather energy, prior to being pushed up and stabilized.
The inverted pendulum is described by the state variables (θ, θ̇) ∈ [−π, π] × [−15, 15] and the differential equation θ̈ = ( m g l sin(θ) − b θ̇ − K(K θ̇ + u)/R ) / J, where J = 1.91 × 10⁻⁴ kg·m², m = 0.055 kg, g = 9.81 m/s², l = 0.042 m, b = 3 × 10⁻⁶ Nm·s/rad, K = 0.0536 Nm/A, and R = 9.5 Ω. The state variable θ̇ is constrained to [−15, 15] by saturation. The discrete time problem is obtained by mapping actions from A = {−3V, 0V, 3V} to segments of a piecewise control signal u, each 0.05 s in duration, and then numerically integrating the differential equation on the constant segments using RK4. The actions are applied stochastically: with probability 0.6, the intended voltage is applied in the control signal, whereas with probability 0.4, the smaller voltage 0.7a is applied. The goal is to stabilize the pendulum in the unstable equilibrium s* = (0, 0) (pointing up, at rest) when starting from state (−π, 0) (pointing down, at rest). This goal is expressed by the penalty function (s, a, s') ↦ −5θ'² − 0.1θ̇'² − a², where s' = (θ', θ̇'). The reward function r is obtained by scaling and translating the values of the penalty function so that it maps to the interval [0, 1], with r(s, 0, s*) = 1. The discount factor is set to γ = 0.95.
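For concreteness, here is a rough Python sketch of this simulator; the number of RK4 substeps is an arbitrary choice and the reward scaling and translation are omitted, so this only illustrates the dynamics described above.

```python
import math, random

J, m_, g, l, b, K, R = 1.91e-4, 0.055, 9.81, 0.042, 3e-6, 0.0536, 9.5

def deriv(theta, omega, u):
    alpha = (m_ * g * l * math.sin(theta) - b * omega - K * (K * omega + u) / R) / J
    return omega, alpha

def step(state, a, dt=0.05, substeps=10):
    """One 0.05 s control step with the stochastic voltage, integrated by RK4."""
    theta, omega = state
    u = a if random.random() < 0.6 else 0.7 * a   # intended vs. attenuated voltage
    h = dt / substeps
    for _ in range(substeps):
        k1 = deriv(theta, omega, u)
        k2 = deriv(theta + h/2 * k1[0], omega + h/2 * k1[1], u)
        k3 = deriv(theta + h/2 * k2[0], omega + h/2 * k2[1], u)
        k4 = deriv(theta + h * k3[0], omega + h * k3[1], u)
        theta += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        omega += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    theta = (theta + math.pi) % (2 * math.pi) - math.pi   # wrap angle to [-pi, pi]
    omega = max(-15.0, min(15.0, omega))                  # saturate to [-15, 15]
    return theta, omega
```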
[Figure 1 plot omitted. Axes: sum of discounted rewards (roughly 15.8 to 17) against calls to the simulator per step (10^2 to 10^4); curves: OP-MDP, UCT (multiplier 0.2), FSSS (1, 2 and 3 samples), and ASOP (forest sizes 2 and 3).]
Figure 1: Comparison of ASOP to OP-MDP, UCT, and FSSS on the inverted pendulum benchmark
problem, showing the sum of discounted rewards for simulations of 50 time steps.
The algorithms are compared for several budgets. In the cases of ASOP, UCT, and FSSS, the budget
is in terms of calls to the simulator. OP-MDP does not use a simulator. Instead, every possible
successor state is incorporated into the planning tree, together with its precise probability mass, and
each of these states is counted against the budget. As the benchmark problem is stochastic, and
internal randomization (for the simulator) is used in all algorithms except OP-MDP, the performance
is averaged over 50 repetitions. The algorithm parameters have been selected manually to achieve
good performance. For ASOP, we show results for forest sizes of two and three. For UCT, the
Chernoff–Hoeffding term multiplier is set to 0.2 (the results are not very sensitive to this value, therefore only one result is shown). For FSSS, we use one to three samples per state-action. For
both UCT and FSSS, a rollout depth of seven is used. OP-MDP does not have any parameters. The
results are shown in figure 1. We observe that on this problem, ASOP performs much better than
OP-MDP for every value of the budget, and also performs well in comparison to the other sampling
based methods, UCT and FSSS.
Figure 2 shows the impact of optimistic planning on the performance of our aggregation method. For
forest sizes of both one and three, optimistic planning leads to considerably increased performance.
This is due to the greater planning depth in the lookahead tree when using optimistic exploration.
For the case of a single tree, performance decreases (presumably due to overfitting) on the stochastic
problem for increasing budget. The effect disappears when more than one tree is used.
7 Conclusion
We introduced ASOP, a novel algorithm for solving online planning problems using a (randomized)
simulator for the MDP, under a budget constraint. The algorithm works by constructing a forest
of single successor state trees, each corresponding to a random realization of the MDP transitions.
Each tree is constructed using a combination of safe and optimistic planning. An empirical MDP
is defined, based on the forest, and the first action of the optimal policy of this empirical MDP is
returned. In short, our algorithm targets structured problems (where the value function possesses
some smoothness property around the optimal policies of the deterministic realizations of the MDP, in
a sense defined e.g. in [13]) by using the optimistic principle to focus rapidly on the most promising
area(s) of the search space. It can also find a reasonable solution in unstructured problems, since some
of the budget is allocated for uniform exploration. ASOP shows good performance on the inverted
pendulum benchmark. Finally, our algorithm is also appealing in that the numerically heavy part of
constructing the planning trees, in which the simulator is used, can be performed in a distributed way.
[Figure 2 plot omitted. Axes: sum of discounted rewards (roughly 15.5 to 17) against calls to the simulator per step (10^1 to 10^4); curves: Safe+Optimistic, Safe, and Optimistic, each with forest sizes 1 and 3.]
Figure 2: Comparison of different planning strategies (on the same problem as in figure 1). The "Safe" strategy is to use uniform planning in the individual trees, the "Optimistic" strategy is to use OPD. ASOP corresponds to the "Safe+Optimistic" strategy.
Acknowledgements
We acknowledge the support of the BMBF project ALICE (01IB10003B), the European Community's Seventh Framework Programme FP7/2007-2013 under grant no 270327 CompLACS and the Belgian PAI DYSCO. Raphaël Fonteneau is a post-doctoral fellow of the F.R.S. - FNRS. We also thank Lucian Busoniu for sharing his implementation of OP-MDP.
Appendix: Counterexample to consistency when using purely optimistic planning in S3-trees
Consider the MDP in figure 3 with k zero reward transitions in the middle branch, where γ ∈ (0, 1) and k ∈ ℕ are chosen such that 1/2 > γ^k > 1/3 (e.g. γ = 0.95 and k = 14). The trees are constructed iteratively, and every iteration consists of exploring a leaf of maximal b-value, where exploring a leaf means introducing a single successor state per action at the selected leaf. The state-action values are:
Q*(x, a) = (1/3) · 1/(1 − γ) + (2/3) · γ^k/(1 − γ) > (1/3) · 1/(1 − γ) + (2/3) · (1/3) · 1/(1 − γ) = (5/9) · 1/(1 − γ)   and   Q*(x, b) = (1/2) · 1/(1 − γ).
There are two possible outcomes when sampling the action a, which occur with probabilities 1/3 and 2/3, respectively:
Outcome I: The upper branch of action a is sampled. In this case, the contribution to the forest is an arbitrarily long reward 1 path for action a, and a finite reward 1/2 path for action b.
Outcome II: The lower branch of action a is sampled. Because γ^k/(1 − γ) < (1/2) · 1/(1 − γ), the lower branch will be explored only up to k times, as its b-value is then lower than the value (and therefore any b-value) of action b. The contribution of this case to the forest is a finite reward 0 path for action a and an arbitrarily long (depending on the budget) reward 1/2 path for action b.
For an increasing exploration budget per tree and an increasing number of trees, the approximate action values of action a and b obtained by aggregation converge to (1/3) · 1/(1 − γ) and (1/2) · 1/(1 − γ), respectively. Therefore, the decision rule will select action b for a sufficiently large budget, even though a is the optimal action. This leads to simple regret of R(x) = Q*(x, a) − Q*(x, b) > (1/18) · 1/(1 − γ).
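The inequalities above can be verified numerically for the suggested choice γ = 0.95 and k = 14:

```python
gamma, k = 0.95, 14
assert 0.5 > gamma**k > 1/3            # gamma^14 is roughly 0.488

one = 1.0 / (1.0 - gamma)
Q_a = (1/3) * one + (2/3) * gamma**k * one   # true value of action a
Q_b = (1/2) * one                            # true value of action b
Q_a_limit = (1/3) * one                      # limit of the aggregated estimate of a

print(Q_a > Q_b)            # True: a is the optimal action ...
print(Q_a_limit < Q_b)      # ... yet the aggregated estimates prefer b
print(Q_a - Q_b > one / 18) # the simple regret exceeds (1/18) * 1/(1 - gamma)
```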
[Figure 3 diagram omitted. From state s0, action a leads with probability 1/3 to a branch of reward-1 transitions (outcome I) and with probability 2/3 to the middle branch (outcome II) of k reward-0 transitions followed by reward-1 transitions; action b leads deterministically to a branch of reward-1/2 transitions.]
Figure 3: The middle branch (II) of this MDP is never explored deep enough if only the node with
the largest b-value is sampled in each iteration. Transition probabilities are given in gray where not
equal to one.
References
[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite time analysis of multiarmed bandit problems. Machine Learning, 47:235–256, 2002.
[2] L. Busoniu and R. Munos. Optimistic planning for Markov decision processes. In International Conference on Artificial Intelligence and Statistics (AISTATS), JMLR W & CP 22, pages 182–189, 2012.
[3] E. F. Camacho and C. Bordons. Model Predictive Control. Springer, 2004.
[4] R. Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. Computers and Games, pages 72–83, 2007.
[5] B. Defourny, D. Ernst, and L. Wehenkel. Lazy planning under uncertainty by optimizing decisions on an ensemble of incomplete disturbance trees. In Recent Advances in Reinforcement Learning - European Workshop on Reinforcement Learning (EWRL), pages 1–14, 2008.
[6] R. Fonteneau, L. Busoniu, and R. Munos. Optimistic planning for belief-augmented Markov decision processes. In IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2013.
[7] S. Gelly, Y. Wang, R. Munos, and O. Teytaud. Modification of UCT with patterns in Monte-Carlo Go. Technical report, INRIA RR-6062, 2006.
[8] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. Systems Science and Cybernetics, IEEE Transactions on, 4(2):100–107, 1968.
[9] J. F. Hren and R. Munos. Optimistic planning of deterministic systems. Recent Advances in Reinforcement Learning, pages 151–164, 2008.
[10] J. E. Ingersoll. Theory of Financial Decision Making. Rowman and Littlefield Publishers, Inc., 1987.
[11] M. Kearns, Y. Mansour, and A. Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49(2-3):193–208, 2002.
[12] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. Machine Learning: ECML 2006, pages 282–293, 2006.
[13] R. Munos. From bandits to Monte-Carlo Tree Search: The optimistic principle applied to optimization and planning. To appear in Foundations and Trends in Machine Learning, 2013.
[14] S. A. Murphy. Optimal dynamic treatment regimes. Journal of the Royal Statistical Society, Series B, 65(2):331–366, 2003.
[15] J. Peters, S. Vijayakumar, and S. Schaal. Reinforcement learning for humanoid robotics. In IEEE-RAS International Conference on Humanoid Robots, pages 1–20, 2003.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning. MIT Press, 1998.
[17] T. J. Walsh, S. Goschin, and M. L. Littman. Integrating sample-based planning and model-based reinforcement learning. In AAAI Conference on Artificial Intelligence, 2010.
4,390 | 4,974 | Online Learning in Episodic Markovian Decision
Processes by Relative Entropy Policy Search
Alexander Zimin
Institute of Science and Technology Austria
[email protected]
Gergely Neu
INRIA Lille – Nord Europe
[email protected]
Abstract
We study the problem of online learning in finite episodic Markov decision processes (MDPs) where the loss function is allowed to change between episodes.
The natural performance measure in this learning problem is the regret defined as
the difference between the total loss of the best stationary policy and the total loss
suffered by the learner. We assume that the learner is given access to a finite action
space A and the state space X has a layered structure with L layers, so that state
transitions are only possible between consecutive layers. We describe a variant of
the recently proposed Relative
p Entropy Policy Search algorithm and show that its
regret
after
T
episodes
is
2
L|X ||A|T log(|X ||A|/L) in the bandit setting and
p
2L T log(|X ||A|/L) in the full information setting, given that the learner has
perfect knowledge of the transition probabilities of the underlying MDP. These
guarantees largely improve previously known results under much milder assumptions and cannot be significantly improved under general assumptions.
1
Introduction
In this paper, we study the problem of online learning in a class of finite non-stationary episodic
Markov decision processes. The learning problem that we consider can be formalized as a sequential
interaction between a learner (often called agent) and an environment, where the interaction between
the two entities proceeds in episodes. Every episode consists of multiple time steps: In every time
step of an episode, a learner has to choose one of its available actions after observing some part
of the current state of the environment. The chosen action influences the observable state of the
environment in a stochastic fashion and imposes some loss on the learner. However, the entire state
(be it observed or not) also influences the loss. The goal of the learner is to minimize its total
(non-discounted) loss that it suffers. In this work, we assume that the unobserved part of the state
evolves autonomously from the observed part of the state or the actions chosen by the learner, thus
corresponding to a state sequence generated by an oblivious adversary such as nature. Otherwise,
absolutely no statistical assumption is made about the mechanism generating the unobserved state
variables. As usual for such learning problems, we set our goal as minimizing the regret defined as
the difference between the total loss suffered by the learner and the total loss of the best stationary
state-feedback policy. This setting fuses two important paradigms of learning theory: online learning
[5] and reinforcement learning [21, 22].
The learning problem outlined above can be formalized as an online learning problem where the
actions of the learner correspond to choosing policies in a known Markovian decision process where
the loss function changes arbitrarily between episodes. This setting is a simplified version of the
Parts of this work were done while Alexander Zimin was enrolled in the MSc. programme of the Central
European University, Budapest, and Gergely Neu was working on his PhD. thesis at the Budapest University of
Technology and Economics and the MTA SZTAKI Institute for Computer Science and Control, Hungary. Both
authors would like to express their gratitude to László Győrfi for making this collaboration possible.
learning problem first addressed by Even-Dar et al. [8, 9], who consider online learning in unichain MDPs. In their variant of the problem, the learner faces a continuing MDP task where all policies
are assumed to generate a unique stationary distribution over the state space and losses can change
arbitrarily between consecutive time steps. Assuming that the learner observes the complete loss
function after each time step (that is, assuming full information feedback), they propose an algorithm called MDP-E and show that its regret is O(τ²√(T log|A|)), where τ > 0 is an upper bound on the mixing time of any policy. The core idea of MDP-E is the observation that the regret of the global
decision problem can be decomposed into regrets of simpler decision problems defined in each state.
Yu et al. [23] consider the same setting and propose an algorithm that guarantees o(T ) regret under
bandit feedback where the learner only observes the losses that it actually suffers, but not the whole
loss function. Based on the results of Even-Dar et al. [9], Neu et al. [16] propose an algorithm
that is shown to enjoy an O(T^{2/3}) bound on the regret in the bandit setting, given some further assumptions concerning the transition structure of the underlying MDP. For the case of continuing deterministic MDP tasks, Dekel and Hazan [7] describe an algorithm guaranteeing O(T^{2/3}) regret.
The immediate precursor of the current paper is the work of Neu et al. [14], who consider online
learning in episodic MDPs where the state space has a layered (or loop-free) structure and every
policy visits every state with a positive probability of at least α > 0. Their analysis is based on a decomposition similar to the one proposed by Even-Dar et al. [9], and is sufficient to prove a regret bound of O(L²√(T|A| log|A|/α)) in the bandit case and O(L²√(T log|A|)) in the full information case.
In this paper, we present a learning algorithm that directly aims to minimize the global regret of the
algorithm instead of trying to minimize the local regrets in a decomposed problem. Our approach
is motivated by the insightful paper of Peters et al. [17], who propose an algorithm called Relative
Entropy Policy Search (REPS) for reinforcement learning problems. As Peters et al. [17] and Kakade
[11] point out, good performance of policy search algorithms requires that the information loss
between the consecutive policies selected by the algorithm is bounded, so that policies are only
modified in small steps. Accordingly, REPS aims to select policies that minimize the expected loss
while guaranteeing that the state-action distributions generated by the policies stay close in terms
of Kullback–Leibler divergence. Further, Daniel et al. [6] point out that REPS is closely related
to a number of previously known probabilistic policy search methods. Our paper is based on the
observation that REPS is closely related to the Proximal Point Algorithm (PPA) first proposed by
Martinet [13] (see also [20]).
We propose a variant of REPS called online REPS or O-REPS and analyze it using fundamental results concerning the PPA family. Our analysis improves all previous results concerning online learning in episodic MDPs: we show that the expected regret of O-REPS is bounded by 2√(L|X||A|T log(|X||A|/L)) in the bandit setting and 2L√(T log(|X||A|/L)) in the full information setting. Unlike previous works in the literature, we do not have to make any assumptions about
the transition dynamics apart from the loop-free assumption. The full discussion of our results is
deferred to Section 5.
Before we move to the technical content of the paper, we first fix some conventions. Random
variables will be typeset in boldface (e.g., x, a) and indefinite sums over states and actions are to be
understood as sums over the entire state and action spaces. For clarity, we assume that all actions
are available in all states, however, this assumption is not essential. The indicator of any event A
will be denoted by I {A}.
2
Problem definition
An episodic loop-free Markov decision process is formally defined by the tuple M = {X , A, P },
where X is the finite state space, A is the finite action space, and P : X × X × A → [0, 1] is the transition function, where P(x'|x, a) is the probability that the next state of the Markovian environment will be x', given that action a is selected in state x. We will assume that M satisfies the following
assumptions:
• The state space X can be decomposed into non-intersecting layers, i.e., X = ∪_{k=0}^{L} X_k, where X_l ∩ X_k = ∅ for l ≠ k.
• X_0 and X_L are singletons, i.e., X_0 = {x_0} and X_L = {x_L}.
• Transitions are possible only between consecutive layers. Formally, if P(x'|x, a) > 0, then x' ∈ X_{k+1} and x ∈ X_k for some 0 ≤ k ≤ L − 1.
The interaction between the learner and the environment is described in Figure 1. The interaction
of an agent and the Markovian environment proceeds in episodes, where in each episode the agent
starts in state x0 and moves forward across the consecutive layers until it reaches state xL .1 We
T
assume that the environment selects a sequence of loss functions {`t }t=1 and the losses only change
between episodes. Furthermore, we assume that the learner only observes the losses that it suffers
in each individual state-action pair that it visits, in other words, we consider bandit feedback.2
Parameters: Markovian environment M = {X, A, P};
For all episodes t = 1, 2, . . . , T, repeat
1. The environment chooses the loss function ℓ_t : X × A → [0, 1].
2. The learner starts in state x_0(t) = x_0.
3. For all time steps l = 0, 1, 2, . . . , L − 1, repeat
(a) The learner observes x_l(t) ∈ X_l.
(b) Based on its previous observations (and randomness), the learner selects a_l(t).
(c) The learner suffers and observes loss ℓ_t(x_l(t), a_l(t)).
(d) The environment draws the new state x_{l+1}(t) ∼ P(·|x_l(t), a_l(t)).
Figure 1: The protocol of online learning in episodic MDPs.
For defining our performance measure, we need to specify a set of reference controllers that is made
available to the learner. To this end, we define the concept of (stochastic stationary) policies: A
policy is defined as a mapping π : A × X → [0, 1], where π(a|x) gives the probability of selecting
action a in state x. The expected total loss of a policy π is defined as
L_T(π) = E[ Σ_{t=1}^{T} Σ_{k=0}^{L−1} ℓ_t(x'_k, a'_k) | P, π ],
where the notation E[·|P, π] is used to emphasize that the random variables x'_k and a'_k are generated by executing π in the MDP specified by the transition function P. Denote the total expected loss suffered by the learner as L̂_T = Σ_{t=1}^{T} Σ_{k=0}^{L−1} E[ℓ_t(x_k(t), a_k(t)) | P], where the expectation is taken over the internal randomization of the learner and the random transitions of the Markovian environment. Using these notations, we define the learner's goal as minimizing the (total expected)
regret defined as
R̂_T = L̂_T − min_π L_T(π),
where the minimum is taken over the complete set of stochastic stationary policies.3
It is beneficial to introduce the concept of occupancy measures on the state-action space X × A: the occupancy measure q^π of policy π is defined as the collection of distributions generated by executing policy π on the episodic MDP described by P:
q^π(x, a) = P[ x'_{k(x)} = x, a'_{k(x)} = a | P, π ],
where k(x) denotes the index of the layer that x belongs to. It is easy to see that the occupancy measure of any policy π satisfies
Σ_a q^π(x, a) = Σ_{x'∈X_{k(x)−1}} Σ_{a'} P(x|x', a') q^π(x', a')   (1)
1
Such MDPs naturally arise in episodic decision tasks where some notion of time is present in the state
description.
2
In the literature of online combinatorial optimization, this feedback scheme is often called semi-bandit
feedback, see Audibert et al. [2].
3
The existence of this minimum is a standard result of MDP theory, see Puterman [18].
for all x ∈ X \ {x_0, x_L}, with q^π(x_0, a) = π(a|x_0) for all a ∈ A. The set of all occupancy measures satisfying the above equality in the MDP M will be denoted as Δ(M). The policy π is said to generate the occupancy measure q ∈ Δ(M) if
π(a|x) = q(x, a) / Σ_b q(x, b)
holds for all (x, a) ∈ X × A. It is clear that there exists a unique generating policy for all measures in Δ(M) and vice versa. The policy generating q will be denoted as π^q. In what follows, we will redefine the task of the learner from having to select individual actions a_k(t) to having to select occupancy measures q_t ∈ Δ(M) in each episode t. To see why this notion simplifies the treatment
of the problem, observe that
E[ Σ_{k=0}^{L−1} ℓ_t(x'_k, a'_k) | P, π^q ] = Σ_{k=0}^{L−1} Σ_{x∈X_k} Σ_a q(x, a) ℓ_t(x, a) = Σ_{x,a} q(x, a) ℓ_t(x, a) =: ⟨q, ℓ_t⟩,   (2)
where we defined the inner product ⟨·, ·⟩ on X × A in the last line. Using this notation, we can
reformulate our original problem as an instance of online linear optimization with decision space
?(M ). Assuming that the learner selects occupancy measure qt in episode t, the regret can be
rewritten as
R̂_T = max_{q∈Δ(M)} E[ Σ_{t=1}^{T} ⟨q_t − q, ℓ_t⟩ ].
3 The algorithm: O-REPS
Using the formalism introduced in the previous section, we now describe our algorithm called Online
Relative Entropy Policy Search (O-REPS). O-REPS is an instance of online linear optimization
methods usually referred to as Follow-the-Regularized-Leader (FTRL), Online Stochastic Mirror
Descent (OSMD) or the Proximal Point Algorithm (PPA); see, e.g., [1], [19], [3] and [2] for a
discussion of these methods and their relations. To allow comparisons with the original derivation of
REPS by Peters et al. [17], we formalize our algorithm as an instance of PPA. Before describing the
algorithm, some more definitions are in order. First, define D(q‖q') as the unnormalized Kullback–Leibler divergence between two occupancy measures q and q':
D(q‖q') = Σ_{x,a} q(x, a) log( q(x, a)/q'(x, a) ) − Σ_{x,a} ( q(x, a) − q'(x, a) ).
Furthermore, let R(q) define the unnormalized negative entropy of the occupancy measure q:
R(q) = Σ_{x,a} q(x, a) log q(x, a) − Σ_{x,a} q(x, a).
We are now ready to define O-REPS formally. In the first episode, O-REPS chooses the uniform
policy with π_1(a|x) = 1/|A| for all x and a, and we let q_1 = q^{π_1}.⁴ Then, the algorithm proceeds recursively: After observing
u_t = (x_0(t), a_0(t), ℓ_t(x_0(t), a_0(t)), . . . , x_{L−1}(t), a_{L−1}(t), ℓ_t(x_{L−1}(t), a_{L−1}(t)), x_L(t))
in episode t, we define the loss estimates ℓ̂_t as
ℓ̂_t(x, a) = ( ℓ_t(x, a) / q_t(x, a) ) I{(x, a) ∈ u_t},
where we used the notation (x, a) ∈ u_t to indicate that the state-action pair (x, a) was observed during episode t. After episode t, O-REPS selects the occupancy measure that solves the optimization
problem
q_{t+1} = argmin_{q∈Δ(M)} { η⟨q, ℓ̂_t⟩ + D(q‖q_t) }.   (3)
⁴ Note that q^π can be simply computed by using (1) recursively.
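For illustration, footnote 4 can be made concrete as follows. The data-structure choices (the layers as a list of lists with layers[0] = [x_0], P as a dictionary mapping (x, a) to a dictionary of successor probabilities, and pi as a dictionary over (x, a)) are assumptions made for this sketch only.

```python
def occupancy(layers, actions, P, pi):
    """Compute q^pi by applying the flow equation (1) layer by layer."""
    q = {}
    x0 = layers[0][0]
    for a in actions:
        q[(x0, a)] = pi[(x0, a)]                   # q(x0, a) = pi(a | x0)
    for k in range(1, len(layers) - 1):            # the terminal layer has no actions
        for x in layers[k]:
            flow = sum(P[(xp, ap)].get(x, 0.0) * q[(xp, ap)]
                       for xp in layers[k - 1] for ap in actions)
            for a in actions:
                q[(x, a)] = flow * pi[(x, a)]      # split the incoming flow by pi(a | x)
    return q
```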
In episode t, our algorithm follows the policy π_t = π^{q_t}. Defining U_t = (u_1, u_2, . . . , u_t), we clearly have that q_t(x, a) = P[(x, a) ∈ u_t | U_{t−1}], so ℓ̂_t(x, a) is an unbiased estimate of ℓ_t(x, a) for all (x, a) such that q_t(x, a) > 0:
E[ ℓ̂_t(x, a) | U_{t−1} ] = ( ℓ_t(x, a) / q_t(x, a) ) · P[(x, a) ∈ u_t | U_{t−1}] = ℓ_t(x, a).   (4)
We now proceed to explain how the policy update step (3) can be implemented efficiently. It is known (see, e.g., Bartók et al. [3, Lemma 8.6]) that performing this optimization can be reformulated as first solving the unconstrained optimization problem
q̃_{t+1} = argmin_q { η⟨q, ℓ̂_t⟩ + D(q‖q_t) }
and then projecting the result to Δ(M) as
q_{t+1} = argmin_{q∈Δ(M)} D(q‖q̃_{t+1}).
The first step can be simply carried out by setting q̃_{t+1}(x, a) = q_t(x, a) e^{−η ℓ̂_t(x,a)}. The projection
step, however, requires more care. To describe the projection procedure, we need to introduce some more notation. For any function v : X → ℝ and loss function ℓ : X × A → [0, 1] we define a function
Δ(x, a | v, ℓ) = −η ℓ(x, a) − Σ_{x'∈X} v(x') P(x'|x, a) + v(x).   (5)
As noted by Peters et al. [17], the above function can be regarded as the Bellman error corresponding
to the value function v. The next proposition provides a succinct formalization of the optimization
problem (3).
Proposition 1. Let t ≥ 1 and define the function
Z_t(v, k) = Σ_{x∈X_k, a∈A} q_t(x, a) e^{Δ(x,a|v,ℓ̂_t)}.
The update step (3) can be performed as
q_{t+1}(x, a) = q_t(x, a) e^{Δ(x,a|ṽ_t,ℓ̂_t)} / Z_t(ṽ_t, k(x)),
where
ṽ_t = argmin_v Σ_{k=0}^{L−1} ln Z_t(v, k).   (6)
can be solved efficiently. It is important to note that since q1 (x, a) > 0 holds for all (x, a) pairs,
qt (x, a) is also positive for all t > 0 by the multiplicative update rule, so Equation 4 holds for all
state-action pairs (x, a) in all time steps.
The proof follows the steps of Peters et al. [17], however, their original formalization of REPS is
slightly different, which results in small changes in the analysis as well. For further comments
regarding the differences between O-REPS and REPS, see Section 5.
Proof of Proposition 1. We start with formulating the projection step as a constrained optimization
problem:
X
subject to
a
X X
x?Xk
min D (qk?
qt+1 )
q
X
q(x, a) =
P (x|x0 , a0 )q(x0 , a0 )
for all x ? X \ {x0 , xl },
x0 ,a0
for all k = 0, 1, . . . , L ? 1.
q(x, a) = 1
a
5
To solve the problem, consider the Lagrangian:
?
L?1
X
X
Lt (q) =D (qk?
qt+1 ) +
?k ?
+
q(x, a) ? 1?
x?Xk ,a?A
k=0
L?1
X
?
?
X
?
X
X
x0 ?Xk?1
a0
v(x) ?
k=1 x?Xk
q(x0 , a0 )P (x|x0 , a0 ) ?
X
q(x, a)?
a
!
=D (qk?
qt+1 ) +
X
q(x0 , a) ?0 +
X
0
0
v(x )P (x |x0 , a)
?
x0
a
L?1
X
?k
k=0
!
+
XX
x6=x0
q(x, a) ?k(x) +
X
v(x0 )P (x0 |x, a) ? v(x) ,
x0
a
L?1
{?k }k=0
and {v(x)}x?X \{x0 ,xl } are Lagrange multipliers. In what follows, we set v(x0 ) =
where
v(xL ) = 0 for convenience. Differentiating the Lagrangian with respect to any q(x, a), we get
X
?Lt (q)
? t+1 (x, a) + ?k(x) +
= ln q(x, a) ? ln q
v(x0 )P (x0 |x, a) ? v(x).
?q(x, a)
0
x
Hence, setting the gradient to zero, we obtain the formula for qt+1 (x, a):
P
? t+1 (x, a)e??k(x) ?
qt+1 (x, a) = q
x0
v(x0 )P (x0 |x,a)+v(x)
.
? t+1 (x, a), we get
Substituting the formula for q
?
qt+1 (x, a) = qt (x, a)e??k(x) +?(x,a|v,`t ) .
Using the second constraint, we have for every k = 0, 1, . . . , L ? 1 that
X X
?
qt (x, a)e??k +?(x,a|v,`t ) = 1,
x?Xk
a
??k
yielding e
= 1/Zt (v, k), which leaves us with computing the value of v at the optimum. This
can be done by solving the dual problem of maximizing
X
? t+1 (x, a) ? L ?
q
x,a
L?1
X
?k
k=0
L?1
over {?k }k=0 . If we drop the constants and express each ?k in terms of Zt (v, k), then the problem
PL?1
is equivalent to maximizing ? k=0 ln Zt (v, k), that is, solving the optimization problem (6).
4 Analysis
The next theorem states our main result concerning the regret of O-REPS under bandit feedback. The
proof of the theorem is based on rather common ideas used in the analysis of FTRL/OSMD/PPAstyle algorithms (see, e.g., [24], Chapter 11 of [5], [1], [12], [2]). After proving the theorem, we also
present the regret bound for O-REPS when used in a full information setting where the learner gets
to observe `t after each episode t.
Theorem 1. Assuming bandit feedback, the total expected regret of O-REPS satisfies
R̂_T ≤ η|X||A|T + (L/η) · log(|X||A|/L).
In particular, setting η = √( L log(|X||A|/L) / (T|X||A|) ) yields
R̂_T ≤ 2 √( L|X||A|T log(|X||A|/L) ).
Proof. By standard arguments (see, e.g., [19, Lemma 12], [3, Lemma 9.2] or [5, Theorem 11.1]), we have
Σ_{t=1}^{T} ⟨q_t − q, ℓ̂_t⟩ ≤ Σ_{t=1}^{T} ⟨q_t − q̃_{t+1}, ℓ̂_t⟩ + D(q‖q_1)/η.   (7)
Using the exact form of q̃_{t+1} and the fact that e^x ≥ 1 + x, we get that
q̃_{t+1}(x, a) − q_t(x, a) ≥ −η q_t(x, a) ℓ̂_t(x, a),
and thus
Σ_{t=1}^{T} ⟨q_t − q̃_{t+1}, ℓ̂_t⟩ ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) ℓ̂_t²(x, a) ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) · ( ℓ_t(x, a)/q_t(x, a) ) · ℓ̂_t(x, a) ≤ η Σ_{t=1}^{T} Σ_{x,a} ℓ̂_t(x, a).
Combining this with (7), we get
Σ_{t=1}^{T} ⟨q_t − q, ℓ̂_t⟩ ≤ η Σ_{t=1}^{T} Σ_{x,a} ℓ̂_t(x, a) + D(q‖q_1)/η.   (8)
Next, we take an expectation on both sides. By Equation (4), we have
E[ Σ_{t=1}^{T} Σ_{x,a} ℓ̂_t(x, a) ] ≤ |X||A|T.
It also follows from Equation (4) that E[⟨q, ℓ̂_t⟩] = ⟨q, ℓ_t⟩ and E[⟨q_t, ℓ̂_t⟩] = E[⟨q_t, ℓ_t⟩]. Finally, notice that
D(q‖q_1) ≤ R(q) − R(q_1) ≤ Σ_{k=0}^{L−1} Σ_{x∈X_k} Σ_a q_1(x, a) log( 1/q_1(x, a) )   (since R(q) ≤ 0)
≤ Σ_{k=0}^{L−1} log( |X_k||A| ) ≤ L log( |X||A|/L ),
where we used the trivial upper bound on the entropy of distributions and Jensen's inequality in the last step. Plugging the above upper bound into Equation (8), we obtain the statement of the theorem. □
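For concreteness, the tuned learning rate and the resulting bandit bound of Theorem 1 can be evaluated numerically; this snippet is purely a formula check.

```python
import math

def bandit_tuning(T, X, A, L):
    eta = math.sqrt(L * math.log(X * A / L) / (T * X * A))
    regret_bound = 2 * math.sqrt(L * X * A * T * math.log(X * A / L))
    return eta, regret_bound

print(bandit_tuning(T=10_000, X=20, A=5, L=4))   # eta ~ 0.0036, bound ~ 7,177
```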
Theorem 2. Assuming full feedback, the total expected regret of O-REPS satisfies
R̂_T ≤ ηLT + (L/η) · log(|X||A|/L).
In particular, setting η = √( log(|X||A|/L) / T ) yields
R̂_T ≤ 2L √( T log(|X||A|/L) ).
The proof of the statement follows directly from the proof of Theorem 1, with the only difference that we set ℓ̂_t = ℓ_t and we can use the tighter upper bound
Σ_{t=1}^{T} ⟨q_t − q̃_{t+1}, ℓ_t⟩ ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) ℓ_t²(x, a) ≤ η Σ_{t=1}^{T} Σ_{x,a} q_t(x, a) = ηLT,
where we used that Σ_{x∈X_k} Σ_a q_t(x, a) = 1 for all layers k.
5 Conclusions and future work
Comparison with previous results We first compare our regret bounds with previous results from the literature. First, our guarantees for the full information case trade off a factor of L present in the bounds of Neu et al. [14] to a (usually much smaller) factor of √(log|X|). More importantly, our bounds trade off a factor of L^{3/2}/√α in the bandit case to a factor of √|X|. This improvement
is particularly remarkable considering that we do not need to assume that α > 0, that is, we drop
the rather unnatural assumption that every stationary policy has to visit every state with positive
probability. In particular, dropping this assumption enables our algorithm to work in deterministic
loop-free MDPs, that is, to solve the online shortest path problem (see, e.g., [10]). In the shortest
path setting, O-REPS provides an alternative implementation to the Component Hedge algorithm
analyzed by Koolen et al. [12], who prove identical bounds in the full information case. As shown
by Audibert et al. [2], Component Hedge achieves the analog of our bounds in the bandit case as
well.
O-REPS also bears close resemblance to the algorithms of Even-Dar et al. [9] and Neu et al. [16], who also use policy updates of the form π_{t+1}(a|x) ∝ π_t(a|x) exp( −η ℓ_t(x, a) − Σ_{x'} P(x'|x, a) v_t(x') ).
The most important difference between their algorithm and O-REPS is that their value functions vt
are computed as the solution of the Bellman-equations instead of the solution of the optimization
problem (6). By a simple combination of our analysis and that of Even-Dar et al. [9], it is possible to show that O-REPS attains a regret of Õ(τ√T) in the unichain setting with full information feedback, improving their bound by a factor of τ^{3/2} under the same assumptions. It is an interesting open
problem to find out if using the O-REPS value functions is a strictly better idea than solving the
Bellman equations in general. Another important direction of future work is to extend our results to
the case of unichain MDPs with bandit feedback and the setting where the transition probabilities of
the underlying MDP is unknown (see Neu et al. [15]).
Lower bounds Following the proof of Theorem 10 in Audibert et al. [2], it is straightforward to construct an MDP consisting of |X|/L chains of L consecutive bandit problems, each with |A| actions, such that no algorithm can achieve smaller regret than 0.03 L √(T log(|X||A|)) in the full information case and 0.04 √(L|X||A|T) in the bandit case. These results suggest that our bounds cannot be significantly improved in general, however, finding an appropriate problem-dependent lower bound remains an interesting open problem in the much broader field of online linear optimization.
REPS vs. O-REPS As noted several times above, our algorithm is directly inspired by the work
of Peters et al. [17]. However, there is a slight difference between the original version of REPS and
O-REPS, namely, Peters et al. aim to solve the optimization problem q_{t+1} = argmin_{q∈Δ(M)} ⟨q, ℓ̂_t⟩ subject to the constraint D(q‖q_t) ≤ ε for some ε > 0. This is to be contrasted with the following property of the occupancy measures generated by O-REPS (proved in the supplementary material):
Lemma 1. For any t > 0, D(q_t‖q_{t+1}) ≤ (η²/2) ⟨q_t, ℓ̂_t²⟩.
In particular, if the losses are estimated by bounded sample averages as done by Peters et al. [17], this gives D(q_t‖q_{t+1}) ≤ η²/2. While this is not the exact same property as desired by REPS,
both inequalities imply that the occupancy measures stay close to each other in the 1-norm sense by
Pinsker's inequality. Thus we conjecture that our formulation of O-REPS has similar properties to
the one studied by Peters et al. [17], while it might be somewhat simpler to implement.
Acknowledgments
Alexander Zimin is an OMV scholar. Gergely Neu's work was carried out during the tenure of an ERCIM "Alain Bensoussan" Fellowship Programme. The research leading to these results has received funding from INRIA, the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements 246016 and 231495 (project CompLACS), the Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and FEDER through the "Contrat de Projets Etat Region (CPER) 2007-2013".
References
[1] Abernethy, J., Hazan, E., and Rakhlin, A. (2008). Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 263–274.
[2] Audibert, J. Y., Bubeck, S., and Lugosi, G. (2013). Regret in online combinatorial optimization. Mathematics of Operations Research. To appear.
[3] Bartók, G., Pál, D., Szepesvári, C., and Szita, I. (2011). Online learning. Lecture notes, University of Alberta. https://moodle.cs.ualberta.ca/file.php/354/notes.pdf.
[4] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
[5] Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
[6] Daniel, C., Neumann, G., and Peters, J. (2012). Hierarchical relative entropy policy search. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of JMLR Workshop and Conference Proceedings, pages 273–281.
[7] Dekel, O. and Hazan, E. (2013). Better rates for any adversarial deterministic MDP. In Dasgupta, S. and McAllester, D., editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 675–683. JMLR Workshop and Conference Proceedings.
[8] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2005). Experts in a Markov decision process. In NIPS-17, pages 401–408.
[9] Even-Dar, E., Kakade, S. M., and Mansour, Y. (2009). Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736.
[10] György, A., Linder, T., Lugosi, G., and Ottucsák, Gy. (2007). The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8:2369–2403.
[11] Kakade, S. (2001). A natural policy gradient. In Advances in Neural Information Processing Systems 14 (NIPS), pages 1531–1538.
[12] Koolen, W. M., Warmuth, M. K., and Kivinen, J. (2010). Hedging structured concepts. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), pages 93–105.
[13] Martinet, B. (1970). Régularisation d'inéquations variationnelles par approximations successives. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, 4(R3):154–158.
[14] Neu, G., György, A., and Szepesvári, Cs. (2010a). The online loop-free stochastic shortest-path problem. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), pages 231–243.
[15] Neu, G., György, A., and Szepesvári, Cs. (2012). The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS 2012, pages 805–813.
[16] Neu, G., György, A., Szepesvári, Cs., and Antos, A. (2010b). Online Markov decision processes under bandit feedback. In NIPS-23, pages 1804–1812. CURRAN.
[17] Peters, J., Mülling, K., and Altun, Y. (2010). Relative entropy policy search. In AAAI 2010, pages 1607–1612.
[18] Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience.
[19] Rakhlin, A. (2009). Lecture notes on online learning.
[20] Rockafellar, R. T. (1976). Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898.
[21] Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.
[22] Szepesvári, Cs. (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
[23] Yu, J. Y., Mannor, S., and Shimkin, N. (2009). Markov decision processes with arbitrary reward processes. Mathematics of Operations Research, 34(3):737–757.
[24] Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, pages 928–936.
4,391 | 4,975 | Online Learning in Markov Decision Processes with
Adversarially Chosen Transition Probability
Distributions
Peter L. Bartlett
UC Berkeley and QUT
[email protected]
Yasin Abbasi-Yadkori
Queensland University of Technology
[email protected]
Varun Kanade
UC Berkeley
[email protected]
Yevgeny Seldin
Queensland University of Technology
[email protected]
Csaba Szepesvári
University of Alberta
[email protected]
Abstract
We study the problem of online learning Markov Decision Processes (MDPs)
when both the transition distributions and loss functions are chosen by an adversary.
We present an algorithm that, under a mixing assumption, achieves
O(√(T log |Π|) + log |Π|) regret with respect to a comparison set of policies Π.
The regret is independent of the size of the state and action spaces. When expectations over sample paths can be computed efficiently and the comparison set Π
has polynomial size, this algorithm is efficient.
We also consider the episodic adversarial online shortest path problem. Here, in
each episode an adversary may choose a weighted directed acyclic graph with an
identified start and finish node. The goal of the learning algorithm is to choose a
path that minimizes the loss while traversing from the start to finish node. At the
end of each episode the loss function (given by weights on the edges) is revealed
to the learning algorithm. The goal is to minimize regret with respect to a fixed
policy for selecting paths. This problem is a special case of the online MDP
problem. It was shown that for randomly chosen graphs and adversarial losses,
the problem can be efficiently solved. We show that it also can be efficiently
solved for adversarial graphs and randomly chosen losses. When both graphs and
losses are adversarially chosen, we show that designing efficient algorithms for
the adversarial online shortest path problem (and hence for the adversarial MDP
problem) is as hard as learning parity with noise, a notoriously difficult problem
that has been used to design efficient cryptographic schemes. Finally, we present
an efficient algorithm whose regret scales linearly with the number of distinct
graphs.
1 Introduction
In many sequential decision problems, the transition dynamics can change with time. For example,
in steering a vehicle, the state of the vehicle is determined by the actions taken by the driver, but
also by external factors, such as terrain and weather conditions. As another example, the state of a
robot that moves in a room is determined both by its actions and by how people in the room interact
with it. The robot might not have influence over these external factors, or it might be very difficult
to model them. Other examples occur in portfolio optimization, clinical trials, and two player games
such as poker.
We consider the problem of online learning Markov Decision Processes (MDPs) when the transition
probability distributions and loss functions are chosen adversarially and are allowed to change with
time. We study the following game between a learner and an adversary:
1. The (oblivious) adversary chooses a sequence of transition kernels m_t and loss functions ℓ_t.
2. At time t:
(a) The learner observes the state xt in state space X and chooses an action at in the action
space A.
(b) The new state x_{t+1} ∈ X is drawn at random according to the distribution m_t(·|x_t, a_t).
(c) The learner observes the transition kernel m_t and the loss function ℓ_t, and suffers the loss ℓ_t(x_t, a_t).
To handle the case when the representation of m_t or ℓ_t is very large, we assume that the learner has black-box access to m_t and ℓ_t. The above game is played for a total of T rounds and the total loss suffered by the learner is Σ_{t=1}^T ℓ_t(x_t, a_t). In the absence of state variables, the MDP problem
reduces to a full information online learning problem (Cesa-Bianchi and Lugosi [1]). The difficulty
with MDP problems is that, unlike full information online learning problems, the choice of a policy
at each round changes the future states and losses.
A policy is a mapping π : X → Δ_A, where Δ_A denotes the set of distributions over A. To evaluate the learner's performance, we imagine a hypothetical game where at each round the action played is chosen according to a fixed policy π, and the transition kernels m_t and loss functions ℓ_t are the same as those chosen by the oblivious adversary. Let (x_t^π, a_t^π) denote a sequence of state and action pairs in this game. Then the loss of the policy π is Σ_{t=1}^T ℓ_t(x_t^π, a_t^π). Define a set Π of policies that will be used as a benchmark to evaluate the learner's performance. The regret of a learner A with respect to a policy π ∈ Π is defined as the random variable R_T(A, π) = Σ_{t=1}^T ℓ_t(x_t, a_t) − Σ_{t=1}^T ℓ_t(x_t^π, a_t^π).
The goal in adversarial online learning is to design learning algorithms for which the regret with
respect to any policy grows sublinearly with T , the total number of rounds played. Algorithms with
such a guarantee, somewhat unfortunately, are typically termed no-regret algorithms.
We also study a special case of this problem: the episodic online adversarial shortest path problem.
Here, in each episode the adversary chooses a layered directed acyclic graph with a unique start and
finish node. The adversary also chooses a loss function, i.e., a weight for every edge in the graph.
The goal of the learning algorithm is to choose a path from start to finish that minimizes the total
loss. The loss along any path is simply the sum of the weights on the edges. At the end of the round
the graph and the loss function are revealed to the learner. The goal, as in the case of the online
MDP problem, is to minimize regret with respect to a class of policies for choosing the path. Note
that the online shortest path problem is a special case of the online MDP problem; the states are the
nodes in the graph and the transition dynamics is specified by the edges.
1.1 Related Work
Burnetas and Katehakis [2], Jaksch et al. [3], and Bartlett and Tewari [4] propose efficient algorithms
for finite MDP problems with stochastic transitions and loss functions. These results are extended
to MDPs with large state and action spaces in [5, 6, 7]. Abbasi-Yadkori and Szepesvári [5] and Abbasi-Yadkori [6] derive algorithms with O(√T) regret for linearly parameterized MDP problems, while Ortner and Ryabko [7] derive O(T^{(2d+1)/(2d+2)}) regret bounds under a Lipschitz assumption,
where d is the dimensionality of the state space. We note that these algorithms are computationally
expensive.
Even-Dar et al. [8] consider the problem of online learning MDPs with fixed and known dynamics,
but adversarially changing loss functions. They show that when the transition kernel satisfies a mixing condition (see Section 3), there is an algorithm with regret bound O(√T). Yu and Mannor [9,
10] study a harder setting, where the transition dynamics may also change adversarially over time.
However, their regret bound scales with the amount of variation in the transition kernels and in the
worst case may grow linearly with time.
Recently, Neu et al. [11] give a no-regret algorithm for the episodic shortest path problem with
adversarial losses but stochastic transition dynamics.
1.2 Our Contributions
First, we study a general MDP problem with large (possibly continuous) state and action spaces
and?adversarially changing dynamics and loss functions. We present an algorithm that guarantees
O( T ) regret with respect to a suitably small (totally bounded) class of policies ? for this online
MDP problem. The regret grows with the metric entropy of ?, so that if the comparison class is
the set of all policies (that is, the algorithm must compete with the optimal fixed policy), it scales
polynomially with the size of the state and action spaces. The above algorithm is efficient as long
as the comparison class has polynomial size and we can compute expectations over sample paths
for each policy. This result has several advantages over the results of [5, 6, 7]. First, the transition
distributions and loss functions are chosen adversarially. Second, by designing an appropriate small
class of comparison policies, the algorithm is efficient, even in the face of very large state and action
spaces.
Next, we present efficient no-regret algorithms for the episodic online shortest path problem for two
cases: when the graphs and loss functions (edge weights) are chosen adversarially and the set of
graphs is small; and when the graphs are chosen adversarially, but the loss is stochastic.
Finally, we show that for the general adversarial online shortest path problem, designing an efficient
no-regret algorithm is at least as hard as learning parity with noise. Since the online shortest path
problem is a special case of online MDP problem, the hardness result is also applicable there.1 The
noisy parity problem is widely believed to be computationally intractable, and has been used to
design cryptographic schemes.
Organization: In Section 3 we introduce an algorithm for MDP problems with adversarially chosen
transition kernels and loss functions. Section 4 discusses how this algorithm can also be applied to
the online episodic shortest path problem with adversarially varying graphs and loss functions and
also considers the case of stochastic loss functions. Finally, in Section 4.2, we show the reduction
from the adversarial online epsiodic shortest path problem to learning parity with noise.
2 Notations
Let X ⊂ R^n be a state space and A ⊂ R^d be an action space. Let Δ_S be the space of probability distributions over a set S. Define a policy π as a mapping π : X → Δ_A. We use π(a|x) to denote the probability of choosing an action a in state x under policy π. A random action under policy π is denoted by π(x). A transition probability kernel (or transition kernel) m is a mapping m : X × A → Δ_X. For finite X, let P(π, m) be the transition probability matrix of policy π under transition kernel m. A loss function is a bounded real-valued function over state and action spaces, ℓ : X × A → R. For a vector v, define ‖v‖_1 = Σ_i |v_i|. For a real-valued function f defined over X × A, define ‖f‖_{∞,1} = max_{x∈X} Σ_{a∈A} |f(x, a)|. The inner product between two vectors v and w is denoted by ⟨v, w⟩.
3 Online MDP Problems
In this section, we study a general MDP problem with large state and action spaces. The adversary
can change the dynamics and the loss functions, but is restricted to choose dynamics that satisfy a
mixing condition.
Assumption A1 (Uniform Mixing). There exists a constant τ > 0 such that for all distributions d and d′ over the state space, any deterministic policy π, and any transition kernel m ∈ M,
  ‖dP(π, m) − d′P(π, m)‖_1 ≤ e^{−1/τ} ‖d − d′‖_1.
¹There was an error in the proof of a claimed hardness result for the online adversarial MDP problem [8]; this claim has since been retracted [12, 13].
For all policies π ∈ Π, w_{π,0} = 1. η = min{√(log |Π| / T), 1/2}.
Choose π_1 uniformly at random.
for t := 1, 2, . . . , T do
  Learner takes the action a_t ∼ π_t(·|x_t) and adversary chooses m_t and ℓ_t.
  Learner suffers loss ℓ_t(x_t, a_t) and observes m_t and ℓ_t. Update state: x_{t+1} ∼ m_t(·|x_t, a_t).
  For all policies π, w_{π,t} = w_{π,t−1} (1 − η)^{E[ℓ_t(x_t^π, π)]}.
  W_t = Σ_{π∈Π} w_{π,t}. For any π, p_{π,t+1} = w_{π,t} / W_t.
  With probability γ_t = w_{π_t,t} / w_{π_t,t−1} choose the previous policy, π_{t+1} = π_t, while with probability 1 − γ_t, choose π_{t+1} based on the distribution p_{·,t+1}.
end for
Figure 1: OMDP: The Online Algorithm for Markov Decision Processes
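For concreteness, the following Python sketch mirrors the loop above. It is our illustration, not the authors' code: `expected_loss(pi, t)` is an assumed oracle returning E[ℓ_t(x_t^π, π)] ∈ [0, 1], and `sample_action`/`step_env` stand in for the black-box access to m_t and ℓ_t.

```python
import math
import random

def omdp(policies, T, expected_loss, sample_action, step_env, x0=None):
    """Sketch of OMDP: exponential weights with lazy (Shrinking Dartboard) switching."""
    eta = min(math.sqrt(math.log(len(policies)) / T), 0.5)
    w = {pi: 1.0 for pi in policies}                 # w_{pi,0} = 1
    cur, x = random.choice(policies), x0             # pi_1 uniform at random
    for t in range(1, T + 1):
        a = sample_action(cur, x)                    # a_t ~ pi_t(.|x_t)
        loss, x = step_env(x, a, t)                  # suffer l_t(x_t, a_t); x_{t+1} ~ m_t
        w_prev = w[cur]
        for pi in policies:                          # w_{pi,t} = w_{pi,t-1}(1-eta)^{E[l_t]}
            w[pi] *= (1.0 - eta) ** expected_loss(pi, t)
        if random.random() > w[cur] / w_prev:        # keep pi_t w.p. w_{pi_t,t}/w_{pi_t,t-1}
            r, acc = random.uniform(0.0, sum(w.values())), 0.0
            for pi in policies:                      # otherwise resample from p_{.,t+1}
                acc += w[pi]
                if acc >= r:
                    cur = pi
                    break
    return cur
```

The lazy switch is exactly what keeps the policy-change probability small, which is what the regret decomposition below exploits.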
This assumption excludes deterministic MDPs that can be more difficult to deal with. As discussed
by Neu et al. [14], if Assumption A1 holds for deterministic policies, then it holds for all policies.
We propose an exponentially-weighted average algorithm for this problem. The algorithm, called
OMDP and shown in Figure 1, maintains a distribution over the policy class, but changes its policy
with a small probability. The main results of this section are the following regret bounds for the
OMDP algorithm. The proofs can be found in Appendix A.
Theorem 1. Let the loss functions selected by the adversary be bounded in [0, 1], and the transition
kernels selected by the adversary satisfy Assumption A1. Then, for any policy π ∈ Π,
  E[R_T(OMDP, π)] ≤ (4 + 2τ²) √(T log |Π|) + log |Π|.
Corollary 2. Let Π be an arbitrary policy space, N(ε) be the ε-covering number of the metric space (Π, ‖·‖_{∞,1}), and C(ε) be an ε-cover. Assume that we run the OMDP algorithm on C(ε). Then, under the same assumptions as in Theorem 1, for any policy π ∈ Π,
  E[R_T(OMDP, π)] ≤ (4 + 2τ²) √(T log N(ε)) + log N(ε) + τεT.
Remark 3. If we choose Π to be the space of deterministic policies and X and A are finite spaces, from Theorem 1 we obtain that E[R_T(OMDP, π)] ≤ (4 + 2τ²) √(T |X| log |A|) + |X| log |A|. This result, however, is not sufficient to show that the average regret with respect to the optimal stationary policy converges to zero. This is because, unlike in the standard MDP framework, the optimal stationary policy is not necessarily deterministic. Corollary 2 extends the result of Theorem 1 to continuous policy spaces.
In particular, if X and A are finite spaces and Π is the space of all policies, N(ε) ≤ (|A|/ε)^{|A||X|}, so the expected regret satisfies E[R_T(OMDP, π)] ≤ (4 + 2τ²) √(T |A||X| log(|A|/ε)) + |A||X| log(|A|/ε) + τεT. By the choice of ε = 1/T, we get that E[R_T(OMDP, π)] = O(τ² √(T |A||X| log(|A|T))).
3.1 Proof Sketch
The main idea behind the design and the analysis of our algorithm is the following regret decomposition:
  R_T(A, π) = Σ_{t=1}^T ℓ_t(x_t^A, a_t) − Σ_{t=1}^T ℓ_t(x_t^{π_t}, π_t) + Σ_{t=1}^T ℓ_t(x_t^{π_t}, π_t) − Σ_{t=1}^T ℓ_t(x_t^π, π),   (1)
where A is an online learning algorithm that generates a policy π_t at round t, x_t^A is the state at round t if we have followed the policies generated by algorithm A, and ℓ(x, π) = ℓ(x, π(x)). Let
  B_T(A) = Σ_{t=1}^T ℓ_t(x_t^A, a_t) − Σ_{t=1}^T ℓ_t(x_t^{π_t}, π_t),   C_T(A, π) = Σ_{t=1}^T ℓ_t(x_t^{π_t}, π_t) − Σ_{t=1}^T ℓ_t(x_t^π, π).
Note that the choice of policies has no influence over future losses in C_T(A, π). Thus, C_T(A, π) can be bounded by a reduction to full information online learning algorithms. Also, notice that the competitor policy π does not appear in B_T(A). In fact, B_T(A) depends only on the algorithm A.
We will show that if the class of transition kernels satisfies Assumption A1 and algorithm A rarely changes its policies, then B_T(A) can be bounded by a sublinear term. To be more precise, let ρ_t be the probability that algorithm A changes its policy at round t. We will require that there exists a constant D such that for any 1 ≤ t ≤ T, any sequence of models m_1, . . . , m_t and loss functions ℓ_1, . . . , ℓ_t, ρ_t ≤ D/√t.
We would like to have a full information online learning algorithm that rarely changes its policy.
The first candidate that we consider is the well-known Exponentially Weighted Average (EWA)
algorithm [15, 16]. In our MDP problem, the EWA algorithm chooses a policy π ∈ Π according to the distribution q_t(π) ∝ exp(−η Σ_{s=1}^{t−1} E[ℓ_s(x_s^π, π)]) for some η > 0. The policies that this EWA algorithm generates are most likely different in consecutive rounds, and thus the EWA algorithm might change its policy frequently. However, a variant of EWA, called Shrinking Dartboard (SD) (Geulen et al. [17]), rarely changes its policy (see Lemma 8 in Appendix A). The OMDP algorithm is based on the SD algorithm. Note that the algorithm needs to know the number of rounds,
T , in advance. Also note that we could use any rarely switching algorithm such as Follow the Lazy
Leader of Kalai and Vempala [18] as the subroutine.
4 Adversarial Online Shortest Path Problem
We consider the following adversarial online shortest path problem with changing graphs. The
problem is a repeated game played between a decision-maker and an (oblivious) adversary over
T rounds. At each round t the adversary presents a directed acyclic graph gt on n nodes to the
decision maker, with L layers indexed by {1, . . . , L} and a special start and finish node. Each layer
contains a fixed set of nodes and has connections only with the next layer.² The decision-maker
must choose a path pt from the start to the finish node. Then, the adversary reveals weights across
all the edges of the graph. The loss ℓ_t(g_t, p_t) of the decision-maker is the weight along the path that
the decision-maker took on that round.
Denote by [k] the set {1, 2, . . . , k}. A policy is a mapping π : [n] → [n]. Each policy may be interpreted as giving a start-to-finish path. Suppose that the start node is s ∈ [n]; then π(i) gives the subsequent node. The path is interpreted as follows: if at a node v the edge (v, π(v)) exists, then the next node is π(v). Otherwise, the next node is an arbitrary (pre-determined) choice that is adjacent to v. We compete against the class of such policies for choosing the shortest path. Denote the class of such policies by Π. The regret of a decision-maker A with respect to a policy π ∈ Π is defined as R_T(A, π) = Σ_{t=1}^T ℓ_t(g_t, p_t) − Σ_{t=1}^T ℓ_t(g_t, π(g_t)), where π(g_t) is the path obtained by following
the policy π starting at the source node. Note that it is possible that there exists no policy that would
result in an actual path that leads to the sink for some graph. In this case we say that the loss of the
policy is infinite. Thus, there may be adversarially chosen sequences of graphs for which the regret
of a decision-maker is not well-defined. This can be easily corrected by the adversary ensuring that
the graph always has some fixed set of edges which result in a (possibly high loss) s ? f path.
In fact, we show that the adversary can choose a sequence of graphs and loss functions that make
this problem at least as hard as learning noisy parities. Learning noisy parities is a notoriously hard
problem in computational learning theory. The best known algorithm runs in time 2^{O(n/log n)} [20]
and the presumed hardness of this and related problems has been used for designing cryptographic
protocols [21].
Interestingly, for the hardness result to hold, it is essential that the adversary have the ability to
control both the sequence of graphs and losses. The problem is well-understood when the graphs
are generated randomly and the losses are adversarial. Jaksch et al. [3] and Bartlett and Tewari [4]
propose efficient algorithms for problems with stochastic losses.3 Neu et al. [22] extend these results
to problems with adversarial loss functions.
²As noted by Neu et al. [19], any directed acyclic graph can be transformed into a graph that satisfies our assumptions.
³These algorithms are originally proposed for continuing problems, but we can use them in shortest path problems with small modifications.
One can also ask what happens in the case when the graphs are chosen by the adversary, but the
weight of each edge is drawn at random according to a fixed stationary distribution. In this setting,
we show a reduction to bandit linear optimization. Thus, in fact, that algorithm does not need to see
the weights of all edges at the end of the round, but only needs to know the loss it suffered.
Finally, we consider the case when both graphs and losses are chosen adversarially. Although the
general problem is at least as hard as learning noisy parities, we give an efficient algorithm whose
regret scales linearly with the number of different graphs. Thus, if the adversary is forced to choose
graphs from some small set G, then we have an efficient algorithm for solving the problem. We note
that in fact, our algorithm does not need to see the graph g_t at the beginning of the round, in which case an algorithm achieving O(|G|√T) may be trivially obtained.
4.1 Stochastic Loss Functions and Adversarial Graphs
Consider the case when the weight of each edge is chosen from a fixed distribution. Then it is easy
to see that the expected loss of any path is a fixed linear function of the expected weights vector. The
set of available paths depends on the graph and it may change from time to time. This is an instance
of stochastic linear bandit problem, for which efficient algorithms exist [23, 24, 25].
Theorem 4. Let us represent each path by a binary vector of length n(n − 1)/2, such that the ith element is 1 only if the corresponding edge is present in the path. Assume that the learner suffers the loss c(p) for choosing path p, where E[c(p)] = ⟨ℓ, p⟩ and the loss vector ℓ ∈ R^{n(n−1)/2} is fixed. Let P_t be the set of paths in graph g_t. Consider the ConfidenceBall_1 algorithm of Dani et al. [24] applied to the shortest path problem with the changing action set P_t and the loss function ℓ. Then the regret with respect to the best path in each round is Cn³√T for a problem-independent constant C.
Let ℓ̂_t be the least squares estimate of ℓ at round t, V_t = Σ_{s=1}^{t−1} p_s p_s^⊤ be the covariance matrix, and P_t be the decision set at round t. The ConfidenceBall_1 algorithm constructs a high-probability norm-1 ball confidence set, C_t = {ℓ : ‖V_t^{1/2}(ℓ − ℓ̂_t)‖_1 ≤ β_t} for an appropriate β_t, and chooses an action p_t according to p_t = argmin_{ℓ∈C_t, p∈P_t} ⟨ℓ, p⟩. Dani et al. [24] prove that the regret of the ConfidenceBall_1 algorithm is bounded by O(m^{3/2}√T), where m is the dimensionality of the action set (in our case m = n(n − 1)/2). The above optimization can be solved efficiently, because only 2n corners of C_t need to be evaluated.
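As a rough illustration of that last point (ours, not the authors' implementation), the optimistic step can enumerate the ball's corners ℓ̂_t ± β_t V_t^{−1/2} e_i explicitly; `ell_hat`, `V`, `beta`, and `paths` are assumed inputs.

```python
import numpy as np

def confidence_ball1_step(ell_hat, V, beta, paths):
    """Pick the path with the smallest optimistic loss over the norm-1 ball
    C_t = {l : ||V^{1/2}(l - ell_hat)||_1 <= beta}; corners are ell_hat +/- beta V^{-1/2} e_i."""
    evals, evecs = np.linalg.eigh(V)                 # V assumed positive definite
    V_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    m = ell_hat.shape[0]
    corners = [ell_hat + s * beta * V_inv_sqrt[:, i]
               for i in range(m) for s in (1.0, -1.0)]
    best, best_val = None, np.inf
    for p in paths:                                  # p is a 0/1 edge-indicator vector
        val = min(float(c @ p) for c in corners)     # optimistic loss of path p
        if val < best_val:
            best, best_val = p, val
    return best
```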
Note that the regret in Theorem 4 is with respect to the best path in each round, which is a stronger
result than competing with a fixed policy.
4.2 Hardness Result
In this section, we show that in the setting where both the graphs and the losses are chosen by an adversary, the problem is at least as hard as the noisy parity problem. We consider the online agnostic
parity learning problem. Recall that the class of parity functions over {0, 1}^n is the following: for S ⊆ [n], PAR_S(x) = ⊕_{i∈S} x_i, where ⊕ denotes modulo-2 addition. The class is PARITIES = {PAR_S | S ⊆ [n]}. In the online setting, the learning algorithm is given x^t ∈ {0, 1}^n, the learning algorithm then picks ŷ^t ∈ {0, 1}, and then the true label y^t is revealed. The learning algorithm suffers loss I(ŷ^t ≠ y^t). The regret of the learning algorithm with respect to PARITIES is defined as: Regret = Σ_{t=1}^T I(ŷ^t ≠ y^t) − min_{PAR_S ∈ PARITIES} Σ_{t=1}^T I(PAR_S(x^t) ≠ y^t). The goal is to design a learning algorithm that runs in time polynomial in n and T that suffers regret O(poly(n)·T^{1−δ}) for some constant δ > 0. It follows from prior work that online agnostic learning of parities is at
least as hard as the offline version (see Littlestone [26], Kanade and Steinke [27]). As mentioned
previously, the agnostic parity learning problem is notoriously difficult. Thus, it seems unlikely that
a computationally efficient no-regret algorithm for this problem exists.
Theorem 5. Suppose there is a no-regret algorithm for the online adversarial shortest path problem that runs in time poly(n, T) and achieves regret O(poly(n)·T^{1−δ}) for any constant δ > 0. Then there is a polynomial-time algorithm for online agnostic parity learning that achieves regret O(poly(n)·T^{1−δ}). By the online-to-batch reduction, this would imply a polynomial-time algorithm
for agnostically learning parities.
for t := 1, 2, . . . do
  Adversary chooses a graph g_t ∈ G
  for l = 1, . . . , L do
    Initialize an EWA expert algorithm, E
    for s = 1, . . . , t − 1 do
      if g_s ∈ C(x_{t,l}) then
        Feed expert E with the value function Q_s = Q_{π_s, g_s, c_s}
      end if
    end for
    Let π_t(·|x_{t,l}) be the distribution over actions of the expert E
    Take the action a_{t,l} ∼ π_t(·|x_{t,l}), suffer the loss c_t(n_{t,l}, a_{t,l}), and move to the node n_{t,l+1} = g_t(n_{t,l}, a_{t,l})
  end for
  Learner observes the graph g_t and the loss function c_t
  Compute the value function Q_t = Q_{π_t, g_t, c_t} for all nodes n′ ∈ [n]
end for
Figure 2: (a) Encoding the example (1, 0, 1, 0, 1) ∈ {0, 1}^5 as a graph. (b) Improved algorithm for the online shortest path problem.
Proof. We first show how to map a point (x, y) to a graph and a loss function. Let (x, y) ∈ {0, 1}^n × {0, 1}. We define a graph g(x) and a loss function ℓ_{x,y} associated with (x, y). Define a graph on 2n + 2 nodes, named 1_a, 2_a, 2_b, 3_a, 3_b, . . . , n_a, n_b, (n+1)_a, (n+1)_b, σ, in that order. Let E(x) denote the set of edges of g(x). The set E(x) contains the following edges:
(i) If x_1 = 1, both (1_a, 2_a) and (1_a, 2_b) are in E(x); else if x_1 = 0, only (1_a, 2_a) is present.
(ii) For 1 < i ≤ n, if x_i = 1, the edges (i_a, (i+1)_a), (i_a, (i+1)_b), (i_b, (i+1)_a), (i_b, (i+1)_b) are all present; if x_i = 0, only the two edges (i_a, (i+1)_a) and (i_b, (i+1)_b) are present.
(iii) The two edges ((n+1)_a, σ) and ((n+1)_b, σ) are always present.
For the loss function, define the weights as follows. The weight of the edge ((n+1)_a, σ) is y; the weight of the edge ((n+1)_b, σ) is 1 − y. The weights of all the remaining edges are set to 0. Figure 2(a) shows the encoding of the example (1, 0, 1, 0, 1) ∈ {0, 1}^5.
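The construction is easy to state in code. The sketch below (our illustration, with hypothetical helper names) builds E(x), scores the final edge, and runs the policy π_S defined below; the type reached at layer n + 1 is 'b' exactly when PAR_S(x) = 1.

```python
def build_graph(x):
    """Edge set E(x) of g(x); nodes are (layer, 'a'/'b') plus the sink 'sigma'."""
    n = len(x)
    E = {((1, 'a'), (2, 'a'))}
    if x[0] == 1:
        E.add(((1, 'a'), (2, 'b')))
    for i in range(2, n + 1):
        E.add(((i, 'a'), (i + 1, 'a')))
        E.add(((i, 'b'), (i + 1, 'b')))
        if x[i - 1] == 1:                      # crossing edges exist iff x_i = 1
            E.add(((i, 'a'), (i + 1, 'b')))
            E.add(((i, 'b'), (i + 1, 'a')))
    E.add(((n + 1, 'a'), 'sigma'))
    E.add(((n + 1, 'b'), 'sigma'))
    return E

def final_edge_loss(last_type, y):
    """Weight of the edge into sigma; all other edges weigh 0."""
    return y if last_type == 'a' else 1 - y

def run_policy_S(S, x):
    """Follow pi_S on g(x); the returned type equals 'b' iff PAR_S(x) = 1."""
    E, typ = build_graph(x), 'a'
    for i in range(1, len(x) + 1):
        other = 'b' if typ == 'a' else 'a'
        if i in S and ((i, typ), (i + 1, other)) in E:
            typ = other                        # switch type when i in S and x_i = 1
    return typ
```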
Suppose an algorithm with the stated regret bound for the online shortest path problem exists, call
it U. We will use this algorithm to solve the online parity learning problem. Let x^t be an example received; then pass the graph g(x^t) to the algorithm U. The start vertex is 1_a and the finish vertex is σ. If the path p_t chosen by U reaches σ using the edge ((n+1)_a, σ), then set ŷ^t to 0. Otherwise, choose ŷ^t = 1.
Thus, in effect we are using algorithm U as a meta-algorithm for the online agnostic parity learning
problem. First, it is easy to check that the loss suffered by the meta-algorithm on the parity problem
is exactly the same as the loss of U on the online shortest path problem. This follows directly from
the definition of the losses on the edges.
Next, we claim that for any S ⊆ [n], there is a policy π_S that achieves the same loss (on the online shortest path problem) as the parity PAR_S does (on the parity learning problem). The policy is as follows:
(i) From node i_a, if i ∈ S and (i_a, (i+1)_b) ∈ E(g^t), go to (i+1)_b; otherwise go to (i+1)_a.
(ii) From node i_b, if i ∈ S and (i_b, (i+1)_a) ∈ E(g^t), go to (i+1)_a; otherwise go to (i+1)_b.
(iii) Finally, from either (n+1)_a or (n+1)_b, just move to σ.
We can think of the path p_t as being in type-a nodes or type-b nodes. For each i ∈ S such that x_i^t = 1, the path p_t switches types. Thus, if PAR_S(x^t) = 1, p_t reaches σ via the edge ((n+1)_b, σ), and if PAR_S(x^t) = 0, p_t reaches σ via the edge ((n+1)_a, σ). Recall that the loss function is defined as follows: the weight of the edge ((n+1)_a, σ) is y^t, the weight of the edge ((n+1)_b, σ) is 1 − y^t; all other edges have loss 0. Thus, the loss suffered by the policy π_S is 1 if PAR_S(x^t) ≠ y^t and 0 otherwise. This is exactly the loss of the parity function PAR_S on the agnostic parity learning problem. Thus, if the algorithm U has regret O(poly(n)·T^{1−δ}), then the meta-algorithm for the online agnostic parity learning problem also has regret O(poly(n)·T^{1−δ}).
Remark 6. We observe that the online shortest path problem is a special case of online MDP
learning. Thus, the above reduction also shows that, short of a major breakthrough, it is unlikely
that there exists a computationally efficient algorithm for the fully adversarial online MDP problem.
4.3 Small Number of Graphs
In this section, we design an efficient algorithm and prove an O(|G|√T) regret bound, where G is the set of graphs played by the adversary up to round T. The computational complexity of the algorithm is O(L²t) at round t. The algorithm does not need to know the set G or |G|. This regret bound holds even if the graphs are revealed at the end of the rounds. Notice that if the graphs are shown at the beginning of the rounds, obtaining regret bounds that scale like O(|G|√T) is trivial; the learner only
needs to run |G| copies of the MDP-E algorithm of Even-Dar et al. [12], one for each graph.
Let n_{t,l}^π denote the node at layer l of round t if we run policy π. Let c_t(n′, a) be the loss incurred for taking action a in node n′ at round t.⁴ We construct a new graph, called G, as follows: graph G also has a layered structure with the same number of layers, L. At each layer, we have a number of states that represent all possible observations that we might have upon arriving at that layer. Thus, a state at layer l has the form x = (s, a_0, n_1, a_1, . . . , n_{l−1}, a_{l−1}, n_l), where n_i belongs to layer i and a_i ∈ A.
Let X be the set of states in G and X_l be the set of states in layer l of G. For (x, a) ∈ X × A, let c(x, a) = c(n(x), a), where n(x) is the last node observed in state x. Let g(n′, a) be the next node under graph g if we take action a in node n′. Let g(x, a) = g(n(x), a). Let c(x, π) = Σ_a π(a|x) c(x, a). For a graph g and a loss function c, define the value functions by
  ∀n′ ∈ [n]:  Q_{π,g,c}(n′, π′) = E_{a∼π′(n′)}[c(n′, a) + Q_{π,g,c}(g(n′, a), π)],
  ∀x s.t. g ∈ C(x):  Q_{π,g,c}(x, π′) = Q_{π,g,c}(n(x), π′),
with Q_{π,g,c}(f, a) = 0 for any π, g, c, a, where f is the finish node. Let Q_t = Q_{π_t, g_t, c_t} denote the value function associated with policy π_t at time t. For x = (s, a_0, n_1, a_1, . . . , n_{l−1}, a_{l−1}, n_l), define C(x) = {g ∈ G : n_1 = g(s, a_0), . . . , n_l = g(n_{l−1}, a_{l−1})}, the set of graphs that are consistent with the state x.
We can use the MDP-E algorithm to generate policies. The algorithm, however, is computationally
expensive as it updates a large set of experts at each round. Notice that the number of states at stage l,
|Xl |, can be exponential in the number of graphs. We show a modification of the MDP-E algorithm
that would generate the same sequence of policies, with the advantage that the new algorithm is
computationally efficient. The algorithm is shown in Figure 2(b). As the generated policies are
always the same, the regret bound in the next theorem, that is proven for the MDP-E algorithm, also
applies to the new algorithm. The proof can be found in Appendix B.
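A minimal sketch of one round of the Figure 2(b) procedure, assuming black-box access to g_t and c_t and a stored history of past value functions Q_s; the incremental maintenance of C(x_{t,l}) by filtering inconsistent graphs is our own rendering of the state construction above.

```python
import math
import random

def improved_round(history, graph_t, cost_t, eta, start_node, L, num_actions):
    """One round of the Figure 2(b) sketch. `history` is a list of pairs
    (g_s, Q_s), where g_s(node, a) gives round-s transitions and Q_s(node, a)
    is that round's value function. C(x_{t,l}) is maintained implicitly: a past
    graph stays consistent while it agrees with every transition seen this round."""
    node, consistent, total_loss = start_node, list(history), 0.0
    for _ in range(L):
        # EWA over actions, fed the Q-values of the consistent past rounds.
        w = [math.exp(-eta * sum(Q(node, a) for _, Q in consistent))
             for a in range(num_actions)]
        r, acc, act = random.uniform(0.0, sum(w)), 0.0, num_actions - 1
        for a, wa in enumerate(w):
            acc += wa
            if acc >= r:
                act = a
                break
        total_loss += cost_t(node, act)
        nxt = graph_t(node, act)
        # Drop past graphs that disagree with the observed transition.
        consistent = [(g, Q) for g, Q in consistent if g(node, act) == nxt]
        node = nxt
    return total_loss
```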
Theorem 7. For any policy π,
  E[R_T(MDP-E, π)] ≤ 2L √(8T log(2T)) + L √(min{|G|, max_l |X_l|} · T log |A| / 2) + 2L.
The theorem gives sublinear regret as long as |G| = o(√T). On the other hand, the hardness result in Theorem 5 applies when |G| = Ω(T). Characterizing regret vs. computational complexity
tradeoffs when |G| is in between remains for future work.
⁴Thus, ℓ_t(g_t, π(g_t)) = Σ_{l=1}^L c_t(n_{t,l}^π, π).
References
[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[2] Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222–255, 1997.
[3] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[4] P. L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In UAI, 2009.
[5] Yasin Abbasi-Yadkori and Csaba Szepesvári. Regret bounds for the adaptive control of linear quadratic systems. In COLT, 2011.
[6] Yasin Abbasi-Yadkori. Online Learning for Linearly Parametrized Control Problems. PhD thesis, University of Alberta, 2012.
[7] Ronald Ortner and Daniil Ryabko. Online regret bounds for undiscounted continuous reinforcement learning. In NIPS, 2012.
[8] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Experts in a Markov decision process. In NIPS, 2004.
[9] Jia Yuan Yu and Shie Mannor. Arbitrarily modulated Markov decision processes. In IEEE Conference on Decision and Control, 2009.
[10] Jia Yuan Yu and Shie Mannor. Online learning in Markov decision processes with arbitrarily changing rewards and transitions. In GameNets, 2009.
[11] Gergely Neu, András György, and Csaba Szepesvári. The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS, 2012.
[12] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736, 2009.
[13] Eyal Even-Dar. Personal communication, 2013.
[14] Gergely Neu, András György, Csaba Szepesvári, and András Antos. Online Markov decision processes under bandit feedback. In NIPS, 2010.
[15] Vladimir Vovk. Aggregating strategies. In COLT, pages 372–383, 1990.
[16] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[17] Sascha Geulen, Berthold Vöcking, and Melanie Winkler. Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, 2010.
[18] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
[19] Gergely Neu, András György, and Csaba Szepesvári. The online loop-free stochastic shortest path problem. In COLT, 2010.
[20] Adam Tauman Kalai, Yishay Mansour, and Elad Verbin. On agnostic boosting and parity learning. In STOC, pages 629–638, 2008.
[21] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. In STOC, pages 84–93, 2005.
[22] Gergely Neu, András György, and Csaba Szepesvári. The adversarial stochastic shortest path problem with unknown transition probabilities. In AISTATS, 2012.
[23] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 2002.
[24] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In Rocco Servedio and Tong Zhang, editors, COLT, pages 355–366, 2008.
[25] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In NIPS, 2011.
[26] Nick Littlestone. From on-line to batch learning. In COLT, pages 269–284, 1989.
[27] Varun Kanade and Thomas Steinke. Learning hurdles for sleeping experts. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12, pages 11–18, 2012.
4,392 | 4,976 | Online Learning of Dynamic Parameters
in Social Networks
Shahin Shahrampour¹, Alexander Rakhlin², Ali Jadbabaie¹
¹Department of Electrical and Systems Engineering, ²Department of Statistics
University of Pennsylvania, Philadelphia, PA 19104 USA
{shahin,jadbabai}@seas.upenn.edu, [email protected]
Abstract
This paper addresses the problem of online learning in a dynamic setting. We
consider a social network in which each individual observes a private signal about
the underlying state of the world and communicates with her neighbors at each
time period. Unlike many existing approaches, the underlying state is dynamic,
and evolves according to a geometric random walk. We view the scenario as an
optimization problem where agents aim to learn the true state while suffering the
smallest possible loss. Based on the decomposition of the global loss function, we
introduce two update mechanisms, each of which generates an estimate of the true
state. We establish a tight bound on the rate of change of the underlying state, under which individuals can track the parameter with a bounded variance. Then, we
characterize explicit expressions for the steady state mean-square deviation (MSD)
of the estimates from the truth, per individual. We observe that only one of the
estimators recovers the optimal MSD, which underscores the impact of the objective function decomposition on the learning quality. Finally, we provide an upper
bound on the regret of the proposed methods, measured as an average of errors in
estimating the parameter in a finite time.
1 Introduction
In recent years, distributed estimation, learning and prediction has attracted considerable attention in a wide variety of disciplines, with applications ranging from sensor networks to social and economic networks [1–6]. In this broad class of problems, agents aim to learn the true value of a parameter
often called the underlying state of the world. The state could represent a product, an opinion, a
vote, or a quantity of interest in a sensor network. Each agent observes a private signal about the
underlying state at each time period, and communicates with her neighbors to augment her imperfect
observations. Despite the wealth of research in this area when the underlying state is fixed (see
e.g., [1–3, 7]), often the state is subject to some change over time (e.g., the price of stocks) [8–11].
Therefore, it is more realistic to study models which allow the parameter of interest to vary. In
the non-distributed context, such models have been studied in the classical literature on time-series
prediction, and, more recently, in the literature on online learning under relaxed assumptions about
the nature of sequences [12]. In this paper we aim to study the sequential prediction problem in the
context of a social network and noisy feedback to agents.
We consider a stochastic optimization framework to describe an online social learning problem when
the underlying state of the world varies over time. Our motivation for the current study is the results
of [8] and [9] where authors propose a social learning scheme in which the underlying state follows
a simple random walk. However, unlike [8] and [9], we assume a geometric random walk evolution
with an associated rate of change. This enables us to investigate the interplay of social learning,
network structure, and the rate of state change, especially in the interesting case that the rate is
greater than unity. We then pose the social learning as an optimization problem in which individuals
aim to suffer the smallest possible loss as they observe the stream of signals. Of particular relevance
to this work is the work of Duchi et al. in [13] where the authors develop a distributed method
based on dual averaging of sub-gradients to converge to the optimal solution. In this paper, we
restrict our attention to quadratic loss functions regularized by a quadratic proximal function, but
there is no fixed optimal solution as the underlying state is dynamic. In this direction, the key
observation is the decomposition of the global loss function into local loss functions. We consider
two decompositions for the global objective, each of which gives rise to a single-consensus-step
belief update mechanism. The first method incorporates the averaged prior beliefs among neighbors
with the new private observation, while the second one takes into account the observations in the
neighborhood as well. In both scenarios, we establish that the estimates are eventually unbiased, and
we characterize an explicit expression for the mean-square deviation(MSD) of the beliefs from the
truth, per individual. Interestingly, this quantity relies on the whole spectrum of the communication
matrix which exhibits the formidable role of the network structure in the asymptotic learning. We
observe that the estimators outperform the upper bound provided for MSD in the previous work [8].
Furthermore, only one of the two proposed estimators can compete with the centralized optimal
Kalman Filter [14] in certain circumstances. This fact underscores the dependence of optimality
on decomposition of the global loss function. We further highlight the influence of connectivity on
learning by quantifying the ratio of MSD for a complete versus a disconnected network. We see that
this ratio is always less than unity and it can get arbitrarily close to zero under some constraints.
Our next contribution is to provide an upper bound for regret of the proposed methods, defined as
an average of errors in estimating the parameter up to a given time minus the long-run expected
loss due to noise and dynamics alone. This finite-time regret analysis is based on the recently
developed concentration inequalities for matrices and it complements the asymptotic statements
about the behavior of MSD.
Finally, we examine the trade-off between network sparsity and learning quality at a microscopic level. Under mild technical constraints, we see that losing each connection has a detrimental effect on learning as it monotonically increases the MSD. On the other hand, capturing agents' communications with a graph, we introduce the notion of an optimal edge as the edge whose addition has the most
effect on learning in the sense of MSD reduction. We prove that such a friendship is likely to occur
between a pair of individuals with high self-reliance that have the least common neighbors.
2 Preliminaries
2.1 State and Observation Model
We consider a network consisting of a finite number of agents V = {1, 2, ..., N }. The agents
indexed by i ∈ V seek the underlying state of the world, x_t ∈ R, which varies over time and evolves according to
  x_{t+1} = a x_t + r_t,   (1)
where r_t is a zero-mean innovation, which is independent over time with finite variance E[r_t²] = σ_r², and a ∈ R is the expected rate of change of the state of the world, assumed to be available to all agents, and could potentially be greater than unity. We assume the initial state x_0 is a finite random variable drawn independently by the nature. At time period t, each agent i receives a private signal y_{i,t} ∈ R, which is a noisy version of x_t, and can be described by the linear equation
  y_{i,t} = x_t + w_{i,t},   (2)
where w_{i,t} is a zero-mean observation noise with finite variance E[w_{i,t}²] = σ_w², and it is assumed to
be independent over time and agents, and uncorrelated to the innovation noise. Each agent i forms
an estimate or a belief about the true value of xt at time t conforming to an update mechanism that
will be discussed later. Much of the difficulty of this problem stems from the hardness of tracking a
dynamic state with noisy observations, especially when |a| > 1, and communication mitigates the
difficulty by virtue of reducing the effective noise.
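As a quick illustration, the sketch below simulates the model (1)–(2). Gaussian noise is an assumption made only for the demo; the paper requires just zero mean and finite variance.

```python
import numpy as np

def simulate(a, sigma_r, sigma_w, N, T, x0=0.0, seed=0):
    """Draws the state path x_{t+1} = a x_t + r_t of (1) and the private
    signals y_{i,t} = x_t + w_{i,t} of (2) for N agents over T periods."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    y = np.empty((T, N))
    x[0] = x0
    for t in range(T):
        y[t] = x[t] + rng.normal(0.0, sigma_w, size=N)   # private signals
        if t + 1 < T:
            x[t + 1] = a * x[t] + rng.normal(0.0, sigma_r)
    return x, y
```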
2.2 Communication Structure
Agents communicate with each other to update their beliefs about the underlying state of the world.
The interaction between agents is captured by an undirected graph G = (V, E), where V is the set of agents, and if there is a link between agent i and agent j, then {i, j} ∈ E. We let N̄_i = {j ∈ V : {i, j} ∈ E} be the set of neighbors of agent i, and N_i = N̄_i ∪ {i}. Each agent i can only communicate with her neighbors, and assigns a weight p_{ij} > 0 for any j ∈ N̄_i. We also let p_{ii} ≥ 0 denote the self-reliance of agent i.
Assumption 1. The communication matrix P = [p_{ij}] is symmetric and doubly stochastic, i.e., it satisfies
  p_{ij} ≥ 0,  p_{ij} = p_{ji},  and  Σ_{j∈N_i} p_{ij} = Σ_{j=1}^N p_{ij} = 1.
We further assume the eigenvalues of P are in descending order and satisfy
  −1 < λ_N(P) ≤ . . . ≤ λ_2(P) < λ_1(P) = 1.
2.3 Estimate Updates
The goal of agents is to learn x_t in a collaborative manner by making sequential predictions. From an optimization perspective, this can be cast as a quest for online minimization of the separable, global,
time-varying cost function
  min_{x̂∈R} f_t(x̂) = (1/N) Σ_{i=1}^N f̄_{i,t}(x̂),  where  f̄_{i,t}(x̂) ≜ (1/2) E[(y_{i,t} − x̂)²],
              = (1/N) Σ_{i=1}^N f̂_{i,t}(x̂),  where  f̂_{i,t}(x̂) ≜ Σ_{j=1}^N p_{ij} f̄_{j,t}(x̂),   (3)
at each time period t. One approach to tackle the stochastic learning problem formulated above is to
employ distributed dual averaging regularized by a quadratic proximal function [13]. To this end, if
agent i exploits f̄_{i,t} as the local loss function, she updates her belief as
  x̂_{i,t+1} = a ( Σ_{j∈N_i} p_{ij} x̂_{j,t} + α (y_{i,t} − x̂_{i,t}) ),   (4)
where the first term inside the parentheses is the consensus update and the second is the innovation update, while using f̂_{i,t} as the local loss function results in the following update:
  x̌_{i,t+1} = a ( Σ_{j∈N_i} p_{ij} x̌_{j,t} + α ( Σ_{j∈N_i} p_{ij} y_{j,t} − x̌_{i,t} ) ),   (5)
where α ∈ (0, 1] is a constant step size that agents place on their innovation update, and we refer to it as the signal weight. Equations (4) and (5) are distinct, single-consensus-step estimators differing
in the choice of the local loss function with (4) using only private observations while (5) averaging
observations over the neighborhood. We analyze both classes of estimators, noting that one might
expect (5) to perform better than (4) due to more information availability.
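In vector form, both updates are one line each, since Σ_{j∈N_i} p_{ij} x_j = (Px)_i and p_{ij} = 0 outside the neighborhood. A minimal sketch:

```python
import numpy as np

def update_hat(x_hat, y, P, a, alpha):
    """One step of estimator (4): private innovation only."""
    return a * (P @ x_hat + alpha * (y - x_hat))

def update_check(x_chk, y, P, a, alpha):
    """One step of estimator (5): neighborhood-averaged innovation."""
    return a * (P @ x_chk + alpha * (P @ y - x_chk))
```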
Note that the choice of a constant step size provides insight into the interplay of persistent innovation
and learning abilities of the network. We remark that agents can easily learn the fixed rate of change
a by taking ratios of observations, and we assume that this has been already performed by the agents
in the past. The case of a changing a is beyond the scope of the present paper. We also point out that
the real-valued (rather than vector-valued) nature of the state is a simplification that forms a clean
playground for the study of the effects of social learning, effects of friendships, and other properties
of the problem.
2.4 Error Process
Defining the local error processes φ̂_{i,t} and φ̌_{i,t} at time t for agent i as
  φ̂_{i,t} ≜ x̂_{i,t} − x_t  and  φ̌_{i,t} ≜ x̌_{i,t} − x_t,
and stacking the local errors in vectors φ̂_t, φ̌_t ∈ R^N, respectively, such that
  φ̂_t ≜ [φ̂_{1,t}, . . . , φ̂_{N,t}]^T  and  φ̌_t ≜ [φ̌_{1,t}, . . . , φ̌_{N,t}]^T,   (6)
one can show that the aforementioned collective error processes can be described as a linear dynamical system.
Lemma 2. Given Assumption 1, the collective error processes φ̂_t and φ̌_t defined in (6) satisfy
  φ̂_{t+1} = Q φ̂_t + ŝ_t  and  φ̌_{t+1} = Q φ̌_t + š_t,   (7)
respectively, where
  Q = a(P − αI_N),   (8)
and
  ŝ_t = (αa)[w_{1,t}, . . . , w_{N,t}]^T − r_t 1_N  and  š_t = (αa)P[w_{1,t}, . . . , w_{N,t}]^T − r_t 1_N,   (9)
with 1_N being the vector of all ones.
Throughout the paper, we let ρ(Q) denote the spectral radius of Q, which is equal to the largest
singular value of Q due to symmetry.
3 Social Learning: Convergence of Beliefs and Regret Analysis
In this section, we study the behavior of estimators (4) and (5) in the mean and mean-square sense,
and we provide the regret analysis.
In the following proposition, we establish a tight bound for a, under which agents can achieve
asymptotically unbiased estimates using proper signal weight.
Proposition 3 (Unbiased Estimates). Given the network G with corresponding communication matrix P satisfying Assumption 1, the rate of change of the social network in (4) and (5) must respect
the constraint
  |a| < 2 / (1 − λ_N(P)),
to allow agents to form asymptotically unbiased estimates of the underlying state.
Proposition 3 determines the trade-off between the rate of change and the network structure. In other
words, changing less than the rate given in the statement of the proposition, individuals can always
track xt with bounded variance by selecting an appropriate signal weight. However, the proposition
does not make any statement on the learning quality. To capture that, we define the steady state
Mean Square Deviation (MSD) of the network from the truth as follows.
Definition 4 ((Steady State-)Mean Square Deviation). Given the network G with a rate of change
which allows unbiased estimation, the steady state of the error processes in (7) is defined as follows
  Σ̂ ≜ lim_{t→∞} E[φ̂_t φ̂_t^T]  and  Σ̌ ≜ lim_{t→∞} E[φ̌_t φ̌_t^T].
Hence, the (steady state) mean square deviation of the network is the deviation from the truth in the mean-square sense, per individual, and it is defined as
  MSD̂ ≜ (1/N) Tr(Σ̂)  and  MSĎ ≜ (1/N) Tr(Σ̌).
Theorem 5 (MSD). Given the error processes (7) with ρ(Q) < 1, the steady state MSD for (4) and (5) is a function of the communication matrix P and the signal weight α as follows:
  MSD̂(P, α) = R_MSD(α) + Ŵ_MSD(P, α),   MSĎ(P, α) = R_MSD(α) + W̌_MSD(P, α),   (10)
where
  R_MSD(α) ≜ σ_r² / (1 − a²(1 − α)²),   (11)
and
  Ŵ_MSD(P, α) ≜ (1/N) Σ_{i=1}^N a²α²σ_w² / (1 − a²(λ_i(P) − α)²)  and  W̌_MSD(P, α) ≜ (1/N) Σ_{i=1}^N a²α²σ_w² λ_i²(P) / (1 − a²(λ_i(P) − α)²).   (12)
Theorem 5 shows that the steady state MSD is governed by all eigenvalues of P contributing to W_MSD, pertaining to the observation noise, while R_MSD is the penalty incurred due to the innovation noise. Moreover, (5) outperforms (4) due to richer information diffusion, which stresses the importance of the global loss function decomposition.
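The expressions (10)–(12) are directly computable from the spectrum of P. The sketch below (ours) also checks ρ(Q) < 1, using the fact that Q = a(P − αI_N) is symmetric, so ρ(Q) = max_i |a(λ_i(P) − α)|.

```python
import numpy as np

def msd(P, a, alpha, sigma_r, sigma_w, averaged=False):
    """Evaluates (10)-(12): steady state MSD per individual.
    averaged=False gives estimator (4), True gives estimator (5)."""
    lam = np.linalg.eigvalsh(P)
    if np.max(np.abs(a * (lam - alpha))) >= 1.0:     # rho(Q) < 1 required
        raise ValueError("rho(Q) >= 1: no steady state")
    r_msd = sigma_r**2 / (1.0 - a**2 * (1.0 - alpha)**2)
    num = a**2 * alpha**2 * sigma_w**2 * (lam**2 if averaged else 1.0)
    w_msd = np.mean(num / (1.0 - a**2 * (lam - alpha)**2))
    return r_msd + w_msd
```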
One might advance a conjecture that a complete network, where all individuals can communicate
with each other, achieves a lower steady state MSD in the learning process since it provides the
most information diffusion among other networks. This intuitive idea is discussed in the following
corollary alongside a few examples.
Corollary 6. Denoting the complete, star, and cycle graphs on N vertices by K_N, S_N, and C_N, respectively, and denoting their corresponding Laplacians by L_{K_N}, L_{S_N}, and L_{C_N}, under the conditions of Theorem 5,
(a) For P = I − ((1 − α)/N) L_{K_N}, we have
  lim_{N→∞} MSD̂_{K_N} = R_MSD(α) + a²α²σ_w².   (13)
(b) For P = I − ((1 − α)/N) L_{S_N}, we have
  lim_{N→∞} MSD̂_{S_N} = R_MSD(α) + a²α²σ_w² / (1 − a²(1 − α)²).   (14)
(c) For P = I − ζ L_{C_N}, where ζ must preserve unbiasedness, we have
  lim_{N→∞} MSD̂_{C_N} = R_MSD(α) + ∫_0^{2π} [a²α²σ_w² / (1 − a²(1 − ζ(2 − 2cos τ) − α)²)] dτ/(2π).   (15)
(d) For P = I − (1/N) L_{K_N}, we have
  lim_{N→∞} MSĎ_{K_N} = R_MSD(α).   (16)
Proof. Noting that the spectra of L_{K_N}, L_{S_N}, and L_{C_N} are, respectively [15], {λ_N = 0, λ_{N−1} = N, . . . , λ_1 = N}, {λ_N = 0, λ_{N−1} = 1, . . . , λ_2 = 1, λ_1 = N}, and {λ_i = 2 − 2cos(2πi/N)}_{i=0}^{N−1}, substituting each case in (10) and taking the limit over N, the proof follows immediately.
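A quick numerical sanity check of cases (a) and (d), reusing the msd() sketch above; the constants are arbitrary demo choices.

```python
import numpy as np

def laplacian_complete(N):
    """Laplacian of K_N."""
    return N * np.eye(N) - np.ones((N, N))

N, a, alpha, s = 500, 1.0, 0.9, 1.0
P_a = np.eye(N) - (1 - alpha) / N * laplacian_complete(N)
print(msd(P_a, a, alpha, s, s))                   # approaches R_MSD + a^2 alpha^2 s^2, cf. (13)
P_d = np.eye(N) - laplacian_complete(N) / N       # P = I - L/N, i.e., the matrix J/N
print(msd(P_d, a, alpha, s, s, averaged=True))    # approaches R_MSD(alpha), cf. (16)
```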
To study the effect of communication, let us consider estimator (4). Under the purview of Theorem 5 and Corollary 6, the ratio of the steady state MSD for a complete network (13) versus a fully disconnected network (P = I_N) can be computed as
  lim_{N→∞} MSD̂_{K_N} / MSD̂_{disconnected} = (σ_r² + a²α²σ_w²(1 − a²(1 − α)²)) / (σ_r² + a²α²σ_w²) ≈ 1 − a²(1 − α)²,
for σ_r² ≪ σ_w². The ratio above can get arbitrarily close to zero, which indeed highlights the influence of communication on the learning quality.
We now consider the Kalman Filter (KF) [14] as the optimal centralized counterpart of (5). It is well-known that the steady state KF satisfies a Riccati equation, and when the parameter of interest is scalar, the Riccati equation simplifies to a quadratic with the positive root
  Σ_KF = (a²σ_w² − σ_w² + Nσ_r² + √((a²σ_w² − σ_w² + Nσ_r²)² + 4Nσ_w²σ_r²)) / (2N).
Therefore, comparing with the complete graph (16), we have
  lim_{N→∞} Σ_KF = σ_r² ≤ σ_r² / (1 − a²(1 − α)²) = R_MSD(α),
and the upper bound can be made tight by choosing α = 1 for |a| < 1/|λ_N(P) − 1|. If |a| ≥ 1/|λ_N(P) − 1|, we should choose an α < 1 to preserve unbiasedness as well.
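For reference, a sketch of the scalar steady state KF value and its large-N behavior:

```python
import math

def kf_steady_state(a, sigma_r, sigma_w, N):
    """Positive root of the scalar Riccati quadratic for the centralized KF."""
    b = a**2 * sigma_w**2 - sigma_w**2 + N * sigma_r**2
    return (b + math.sqrt(b**2 + 4 * N * sigma_w**2 * sigma_r**2)) / (2 * N)

# As N grows this approaches sigma_r^2, matching the comparison with (16).
print(kf_steady_state(a=1.2, sigma_r=1.0, sigma_w=1.0, N=10**6))  # ~1.0
```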
On the other hand, to evaluate the performance of estimator (4), we consider the upper bound
  MSD_Bound = (σ_r² + α²σ_w²) / α,   (17)
derived in [8], for a = 1 via a distributed estimation scheme. For simplicity, we assume σ_w² = σ_r² = σ², and let ζ in (15) be any diminishing function of N. Optimizing (13), (14), (15), and (17) over α, we obtain
  lim_{N→∞} MSD̂_{K_N} ≈ 1.55σ² < lim_{N→∞} MSD̂_{S_N} = lim_{N→∞} MSD̂_{C_N} ≈ 1.62σ² < MSD_Bound = 2σ²,
which suggests a noticeable improvement in learning even in the star and cycle networks where the
number of individuals and connections are in the same order.
Regret Analysis

We now turn to finite-time regret analysis of our methods. The average loss of all agents in predicting the state, up until time T, is

(1/T) Σ_{t=1}^T (1/N) Σ_{i=1}^N (x̂_{i,t} − x_t)² = (1/T) Σ_{t=1}^T (1/N) Tr(Φ̂_t Φ̂_tᵀ),

where Φ̂_t denotes the stacked vector of errors x̂_{i,t} − x_t. As motivated earlier, it is not possible, in general, to drive this average loss to zero, and we need to subtract off the limit. We thus define regret as

R_T ≜ (1/T) Σ_{t=1}^T (1/N) Tr(Φ̂_t Φ̂_tᵀ) − (1/N) Tr(Σ̂) = (1/N) Tr((1/T) Σ_{t=1}^T Φ̂_t Φ̂_tᵀ − Σ̂),

where Σ̂ is from Definition 4. We then have for the spectral norm ‖·‖ that

R_T ≤ ‖(1/T) Σ_{t=1}^T Φ_t Φ_tᵀ − Σ̂‖,  (18)
where we dropped the distinguishing notation between the two estimators since the analysis works for both of them. We first state a technical lemma from [16] that we invoke later for bounding the quantity R_T. For simplicity, we assume that the magnitudes of both innovation and observation noise are bounded.

Lemma 7. Let {s_t}_{t=1}^T be an independent family of vector valued random variables, and let H be a function that maps T variables to a self-adjoint matrix of dimension N. Consider a sequence {A_t}_{t=1}^T of fixed self-adjoint matrices that satisfy

(H(ξ_1, ..., ξ_t, ..., ξ_T) − H(ξ_1, ..., ξ'_t, ..., ξ_T))² ⪯ A_t²,

where ξ_i and ξ'_i range over all possible values of s_i for each index i. Letting Var = ‖Σ_{t=1}^T A_t²‖, for all c ≥ 0 we have

P(‖H(s_1, ..., s_T) − E[H(s_1, ..., s_T)]‖ ≥ c) ≤ N e^{−c²/(8 Var)}.
Theorem 8. Under conditions of Theorem 5 together with boundedness of the noise, max_{t≤T} ‖s_t‖ ≤ s for some s > 0, the regret function defined in (18) satisfies

R_T ≤ (8s²/√T) · √(2 log(N/δ)) / (1 − ρ²(Q)) + (‖Φ_0‖²/T) · 1/(1 − ρ(Q))² + (2s‖Φ_0‖/T) · 1/(1 − ρ(Q)) + (s²/T) · 1/(1 − ρ²(Q)),  (19)

with probability at least 1 − δ.
We mention that results that are similar in spirit have been studied for general unbounded stationary
ergodic time series in [17–19] by employing techniques from the online learning literature. On the
other hand, our problem has the network structure and the specific evolution of the hidden state, not
present in the above works.
4 The Impact of New Friendships on Social Learning
In the social learning model we proposed, agents are cooperative and they aim to accomplish a global
objective. In this direction, the network structure contributes substantially to the learning process.
In this section, we restrict our attention to estimator (5), and characterize the intuitive idea that making (losing) friendships can influence the quality of learning in the sense of decreasing (increasing)
the steady state MSD of the network.
To commence, letting e_i denote the i-th unit vector in the standard basis of R^N, we exploit the negative semi-definite edge function matrix

ΔP(i, j) ≜ −(e_i − e_j)(e_i − e_j)ᵀ,  (20)

for edge addition (removal) to (from) the graph. Essentially, if there is no connection between agents i and j,

P_η ≜ P + η ΔP(i, j),  (21)

for η < min{p_ii, p_jj}, corresponds to a new communication matrix adding the edge {i, j} with a weight η to the network G, and subtracting η from the self-reliance of agents i and j.

Proposition 9. Let G⁻ be the network that results from removing the bidirectional edge {i, j} with the weight η from the network G, and let P_{−η} and P denote the communication matrices associated to G⁻ and G, respectively. Given Assumption 1, for a fixed signal weight α the following relationship holds

MSD(P, α) ≤ MSD(P_{−η}, α),  (22)

as long as P is positive semi-definite, and |a| < 1/|α|.
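A small numerical check of Proposition 9, assuming the estimator-(5) form of W_MSD from (12); the lazy ring walk below is one convenient positive semi-definite, doubly stochastic choice of P:

    import numpy as np

    def msd(P, a, alpha, sigma_r2, sigma_w2):
        # R_MSD plus the estimator-(5) W_MSD from (12) (reconstructed form)
        lam = np.linalg.eigvalsh(P)
        R = sigma_r2 / (1 - a**2 * (1 - alpha)**2)
        W = (a**2 * alpha**2 * sigma_w2 / len(lam)) * np.sum(
            lam**2 / (1 - a**2 * (lam - alpha)**2))
        return R + W

    N, a, alpha = 20, 0.9, 0.6
    P = 0.5 * np.eye(N)
    for i in range(N):
        P[i, (i - 1) % N] += 0.25
        P[i, (i + 1) % N] += 0.25

    # Remove the edge {0, 1} with weight eta: P_{-eta} = P - eta * DeltaP(0, 1).
    eta = 0.1
    e = np.zeros(N); e[0], e[1] = 1.0, -1.0
    P_minus = P + eta * np.outer(e, e)

    print("MSD(P)      :", msd(P, a, alpha, 1.0, 1.0))
    print("MSD(P_-eta) :", msd(P_minus, a, alpha, 1.0, 1.0))  # >= MSD(P), as (22) states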
Under a mild technical assumption, Proposition 9 suggests that losing connections monotonically
increases the MSD, and individuals tend to maintain their friendships to obtain a lower MSD as a
global objective. However, this does not elaborate on the existence of individuals with whom losing
or making connections could have an immense impact on learning. We bring this concept to light
in the following proposition by finding a so-called optimal edge, which provides the most MSD reduction in case it is added to the network graph.
Proposition 10. Given Assumption 1, a positive semi-definite P, and |a| < 1/|α|, to find the optimal edge with a pre-assigned weight η ≪ 1 to add to the network G, we need to solve the following optimization problem

min_{{i,j}∉E} Σ_{k=1}^N h_k(i, j) ≜ Σ_{k=1}^N [z_k(i, j)(2(1 − α²a²)λ_k(P) + 2a²α λ_k²(P))] / (1 − a²(λ_k(P) − α)²)²,  (23)

where

z_k(i, j) ≜ (v_kᵀ ΔP(i, j) v_k) η,  (24)

and {v_k}_{k=1}^N are the right eigenvectors of P. In addition, letting θ_max = max_{k>1} |λ_k(P) − α|,

min_{{i,j}∉E} Σ_{k=1}^N h_k(i, j) ≥ min_{{i,j}∉E} [−2η((1 − α²a²)(p_ii + p_jj) + a²α([P²]_ii + [P²]_jj − 2[P²]_ij))] / (1 − a²θ_max²)².  (25)
Proof. Representing the first order approximation of λ_k(P_η) using the definition of z_k(i, j) in (24), we have λ_k(P_η) ≈ λ_k(P) + z_k(i, j) for η ≪ 1. Based on Theorem 5, we now derive

MSD(P_η, α) − MSD(P, α)
  ∝ Σ_{k=1}^N [(λ_k(P_η) − λ_k(P))((1 − α²a²)(λ_k(P_η) + λ_k(P)) + 2a²α λ_k(P) λ_k(P_η))] / [(1 − a²(λ_k(P) − α)²)(1 − a²(λ_k(P_η) − α)²)]
  = Σ_{k=1}^N [z_k(i, j)(2(1 − α²a²)λ_k(P) + 2a²α λ_k²(P) + (1 − α²a² + 2a²α λ_k(P)) z_k(i, j))] / [(1 − a²(λ_k(P) − α)²)(1 − a²(λ_k(P) − α + z_k(i, j))²)]
  = Σ_{k=1}^N [z_k(i, j)(2(1 − α²a²)λ_k(P) + 2a²α λ_k²(P))] / (1 − a²(λ_k(P) − α)²)² + O(η²),

noting that z_k(i, j) is O(η) from the definition (24). Minimizing MSD(P_η, α) − MSD(P, α) is, hence, equivalent to optimization (23) when η ≪ 1. Taking into account that P is positive semi-definite, z_k(i, j) ≤ 0 for k ≥ 2, and v_1 = 1_N/√N which implies z_1(i, j) = 0, we proceed to the lower bound proof using the definition of h_k(i, j) and θ_max in the statement of the proposition, as follows:

Σ_{k=1}^N h_k(i, j) = Σ_{k=2}^N [z_k(i, j)(2(1 − α²a²)λ_k(P) + 2a²α λ_k²(P))] / (1 − a²(λ_k(P) − α)²)²
  ≥ 1/(1 − a²θ_max²)² Σ_{k=2}^N z_k(i, j)(2(1 − α²a²)λ_k(P) + 2a²α λ_k²(P)).

Substituting z_k(i, j) from (24) into the above, we have

Σ_{k=1}^N h_k(i, j) ≥ 2η/(1 − a²θ_max²)² Σ_{k=1}^N v_kᵀ ΔP(i, j) v_k ((1 − α²a²)λ_k(P) + a²α λ_k²(P))
  = 2η/(1 − a²θ_max²)² Tr(ΔP(i, j) Σ_{k=1}^N ((1 − α²a²)λ_k(P) + a²α λ_k²(P)) v_k v_kᵀ)
  = 2η/(1 − a²θ_max²)² Tr(ΔP(i, j)((1 − α²a²)P + a²α P²)).

Using the facts that Tr(ΔP(i, j)P) = −p_ii − p_jj + 2p_ij and Tr(ΔP(i, j)P²) = −[P²]_ii − [P²]_jj + 2[P²]_ij according to the definition of ΔP(i, j) in (20), and p_ij = 0 since we are adding a non-existent edge {i, j}, the lower bound (25) is derived.
Besides posing the optimal edge problem as an optimization, Proposition 10 also provides an upper bound for the best improvement that making a friendship brings to the network. In view of (25), forming a connection between two agents with more self-reliance and fewer common neighbors minimizes the lower bound, which offers the most room for MSD reduction.
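In practice, the objective (23) can be evaluated from a single eigendecomposition of P and compared against the exact change in the MSD once the edge is added. The sketch below does this on a small random tree; the estimator-(5) form of W_MSD is assumed from (12), and constants common to both sides are dropped.

    import numpy as np

    rng = np.random.default_rng(0)
    N, a, alpha, eta = 12, 0.9, 0.6, 0.01

    # Random spanning tree; P = I - delta*L is symmetric, doubly stochastic,
    # and positive semi-definite for delta <= 1/lambda_max(L).
    A = np.zeros((N, N))
    for v in range(1, N):
        u = rng.integers(0, v)
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(1)) - A
    delta = 0.5 / np.linalg.eigvalsh(L)[-1]
    P = np.eye(N) - delta * L

    lam, V = np.linalg.eigh(P)           # one decomposition, reused below

    def W(M):                            # W_MSD for estimator (5), constants dropped
        ev = np.linalg.eigvalsh(M)
        return np.sum(ev**2 / (1 - a**2 * (ev - alpha)**2))

    def score(i, j):                     # first-order objective (23), constants dropped
        diff = V[i, :] - V[j, :]
        z = -eta * diff**2               # z_k = eta * v_k' DeltaP(i,j) v_k <= 0
        num = z * (2 * (1 - alpha**2 * a**2) * lam + 2 * a**2 * alpha * lam**2)
        return np.sum(num / (1 - a**2 * (lam - alpha)**2)**2)

    non_edges = [(i, j) for i in range(N) for j in range(i + 1, N)
                 if A[i, j] == 0]
    best = min(non_edges, key=lambda ij: score(*ij))

    i, j = best
    e = np.zeros(N); e[i], e[j] = 1.0, -1.0
    P_eta = P - eta * np.outer(e, e)     # P + eta*DeltaP(i,j): adds edge {i,j}
    print("best edge:", best)
    print("predicted dMSD (up to constants):", score(i, j))
    print("exact dMSD (up to constants):    ", W(P_eta) - W(P))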
5 Conclusion
We studied a distributed online learning problem over a social network. The goal of agents is to
estimate the underlying state of the world which follows a geometric random walk. Each individual
receives a noisy signal about the underlying state at each time period, so she communicates with her
neighbors to recover the true state. We viewed the problem with an optimization lens where agents
want to minimize a global loss function in a collaborative manner. To estimate the true state, we
proposed two methodologies derived from a different decomposition of the global objective. Given
the structure of the network, we provided a tight upper bound on the rate of change of the parameter which allows agents to follow the state with a bounded variance. Moreover, we computed the
averaged, steady state, mean-square deviation of the estimates from the true state. The key observation was optimality of one of the estimators indicating the dependence of learning quality on the
decomposition. Furthermore, defining the regret as the average of errors in the process of learning
during a finite time T, we demonstrated that the regret function of the proposed algorithms decays with a rate O(1/√T). Finally, under mild technical assumptions, we characterized the influence of
network pattern on learning by observing that each connection brings a monotonic decrease in the
MSD.
Acknowledgments
We gratefully acknowledge the support of AFOSR MURI CHASE, ONR BRC Program on Decentralized, Online Optimization, NSF under grants CAREER DMS-0954737 and CCF-1116928, as
well as Dean's Research Fund.
References
[1] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Association, vol. 69, no. 345, pp. 118–121, 1974.
[2] A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi, "Non-Bayesian social learning," Games and Economic Behavior, vol. 76, no. 1, pp. 210–225, 2012.
[3] E. Mossel and O. Tamuz, "Efficient Bayesian learning in social networks with Gaussian estimators," arXiv preprint arXiv:1002.0747, 2010.
[4] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao, "Optimal distributed online prediction using mini-batches," The Journal of Machine Learning Research, vol. 13, pp. 165–202, 2012.
[5] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Fourth International Symposium on Information Processing in Sensor Networks. IEEE, 2005, pp. 63–70.
[6] S. Kar, J. M. Moura, and K. Ramanan, "Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication," IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3575–3605, 2012.
[7] S. Shahrampour and A. Jadbabaie, "Exponentially fast parameter estimation in networks using distributed dual averaging," arXiv preprint arXiv:1309.2350, 2013.
[8] D. Acemoglu, A. Nedic, and A. Ozdaglar, "Convergence of rule-of-thumb learning rules in social networks," in 47th IEEE Conference on Decision and Control, 2008, pp. 1714–1720.
[9] R. M. Frongillo, G. Schoenebeck, and O. Tamuz, "Social learning in a changing world," in Internet and Network Economics. Springer, 2011, pp. 146–157.
[10] U. A. Khan, S. Kar, A. Jadbabaie, and J. M. Moura, "On connectivity, observability, and stability in distributed estimation," in 49th IEEE Conference on Decision and Control, 2010, pp. 6639–6644.
[11] R. Olfati-Saber, "Distributed Kalman filtering for sensor networks," in 46th IEEE Conference on Decision and Control, 2007, pp. 5492–5498.
[12] N. Cesa-Bianchi, Prediction, Learning, and Games. Cambridge University Press, 2006.
[13] J. C. Duchi, A. Agarwal, and M. J. Wainwright, "Dual averaging for distributed optimization: convergence analysis and network scaling," IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 592–606, 2012.
[14] R. E. Kalman et al., "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[15] M. Mesbahi and M. Egerstedt, Graph Theoretic Methods in Multiagent Networks. Princeton University Press, 2010.
[16] J. A. Tropp, "User-friendly tail bounds for sums of random matrices," Foundations of Computational Mathematics, vol. 12, no. 4, pp. 389–434, 2012.
[17] G. Biau, K. Bleakley, L. Györfi, and G. Ottucsák, "Nonparametric sequential prediction of time series," Journal of Nonparametric Statistics, vol. 22, no. 3, pp. 297–317, 2010.
[18] L. Györfi and G. Ottucsák, "Sequential prediction of unbounded stationary time series," IEEE Transactions on Information Theory, vol. 53, no. 5, pp. 1866–1872, 2007.
[19] L. Györfi, G. Lugosi et al., Strategies for Sequential Prediction of Stationary Time Series. Springer, 2000.
4,393 | 4,977 | Modeling Overlapping Communities with
Node Popularities
Prem Gopalan1 , Chong Wang2 , and David M. Blei1
1
Department of Computer Science, Princeton University, {pgopalan,blei}@cs.princeton.edu
2
Machine Learning Department, Carnegie Mellon University, {chongw}@cs.cmu.edu
Abstract
We develop a probabilistic approach for accurate network modeling using node
popularities within the framework of the mixed-membership stochastic blockmodel (MMSB). Our model integrates two basic properties of nodes in social
networks: homophily and preferential connection to popular nodes. We develop a
scalable algorithm for posterior inference, based on a novel nonconjugate variant
of stochastic variational inference. We evaluate the link prediction accuracy of our
algorithm on nine real-world networks with up to 60,000 nodes, and on simulated
networks with degree distributions that follow a power law. We demonstrate that
the AMP predicts significantly better than the MMSB.
1 Introduction
Social network analysis is vital to understanding and predicting interactions between network entities [6, 19, 21]. Examples of such networks include online social networks, collaboration networks
and hyperlinked blogs. A central problem in social network analysis is to identify hidden community
structures and node properties that can best explain the network data and predict connections [19].
Two node properties underlie the most successful models that explain how network connections
are generated. The first property is popularity. This is the basis for preferential attachment [12],
according to which nodes preferentially connect to popular nodes. The resulting degree distributions
from this process are known to satisfy empirically observed properties such as power laws [24]. The
second property that underlies many network models is homophily or similarity, according to which
nodes with similar observed or unobserved attributes are more likely to connect to each other. To best
explain social network data, a probabilistic model must capture these competing node properties.
Recent theoretical work [24] has argued that optimizing the trade-offs between popularity and similarity best explains the evolution of many real networks. It is intuitive that combining both notions
of attractiveness, i.e., popularity and similarity, is essential to explain how networks are generated.
For example, on the Internet a user's web page may link to another user due to a common interest in skydiving. The same user's page may also link to popular web pages such as Google.com.
In this paper, we develop a probabilistic model of networks that captures both popularity and homophily. To capture homophily, our model is built on the mixed-membership stochastic blockmodel
(MMSB) [2], a community detection model that allows nodes to belong to multiple communities.
(For example, a member of a large social network might belong to overlapping communities of
neighbors, co-workers, and school friends.) The MMSB provides better fits to real network data
than single community models [23, 27], but cannot account for node popularities.
Specifically, we extend the assortative MMSB [9] to incorporate per-community node popularity.
We develop a scalable algorithm for posterior inference, based on a novel nonconjugate variant of
stochastic variational inference [11]. We demonstrate that our model predicts significantly better
[Figure 1 panels: the netscience collaboration network (left), with prominent author nodes labeled NEWMAN, M; JEONG, H; SOLE, R; BARABASI, A; PASTORSATORRAS, R; HOLME, P, and the political blog network (right), each shown for AMP and MMSB.]
Figure 1: We visualize the discovered community structure and node popularities in a giant component of the
netscience collaboration network [22] (Left). Each link denotes a collaboration between two authors, colored
by the posterior estimate of its community assignment. Each author node is sized by its estimated posterior
popularity and colored by its dominant research community. The network is visualized using the FructermanReingold algorithm [7]. Following [14], we show an example where incorporating node popularities helps in
accurately identifying communities (Right). The division of the political blog network [1] discovered by the
AMP corresponds closely to the liberal and conservative blogs identified in [1]; the MMSB has difficulty in
delineating these groups.
than the stochastic variational inference algorithm for the MMSB [9] on nine large real-world networks. Further, using simulated networks, we show that node popularities are essential for predictive
accuracy in the presence of power-law distributed node degrees.
Related work. There have been several research efforts to incorporate popularity into network models. Karrer et al. [14] proposed the degree-corrected blockmodel that extends the classic stochastic
blockmodels [23] to incorporate node popularities. Krivitsky et al. [16] proposed the latent cluster
random effects model that extends the latent space model [10] to include node popularities. Both
models capture node similarity and popularity, but assume that unobserved similarity arises from
each node participating in a single community. Finally, the Poisson community model [4] is a
probabilistic model of overlapping communities that implicitly captures degree-corrected mixed-memberships. However, the standard EM inference under this model drives many of the per-node
community parameters to zero, which makes it ineffective for prediction or model metrics based on
prediction (e.g., to select the number of communities).
2 Modeling node popularity and similarity
The assortative mixed-membership stochastic blockmodel (MMSB) [9] treats the links or non-links
yab of a network as arising from interactions between nodes a and b. Each node a is associated
with community memberships π_a, a distribution over communities. The probability that two nodes
are linked is governed by the similarity of their community memberships and the strength of their
shared communities.
Given the communities of a pair of nodes, the link indicators y_ab are independent. We draw y_ab repeatedly by choosing a community assignment (z_{a→b}, z_{a←b}) for a pair of nodes (a, b), and drawing a binary value from a community distribution. Specifically, the conditional probability of a link in MMSB is

p(y_ab = 1 | z_{a→b,i}, z_{a←b,j}, B) = Σ_{i=1}^K Σ_{j=1}^K z_{a→b,i} z_{a←b,j} B_{ij},

where B is the blockmodel matrix of community strength parameters to be estimated. In the assortative MMSB [9], the non-diagonal entries of the blockmodel matrix are set close to 0. This captures node similarity in community memberships: if two nodes are linked, it is likely that the latent community indicators were the same.
In the proposed model, assortative MMSB with node popularities, or AMP, we introduce latent variables θ_a to capture the popularity of each node a, i.e., its propensity to attract links independent of its community memberships. We capture the effect of node popularity and community similarity on link generation using a logit model

logit(p(y_ab = 1 | z_{a→b}, z_{a←b}, β, θ)) ≜ θ_a + θ_b + Σ_{k=1}^K δ_ab^k β_k,  (1)

where we define indicators δ_ab^k = z_{a→b,k} z_{a←b,k}. The indicator δ_ab^k is one if both nodes assume the same community k.
Eq. 1 is a log-linear model [20]. In log-linear models, the random component, i.e., the expected probability of a link, has a multiplicative dependency on the systematic components, i.e., the covariates. This model is also similar in spirit to the random effects model [10]: the node-specific effect θ_a captures the popularity of individual nodes while the Σ_{k=1}^K δ_ab^k β_k term captures the interactions through latent communities. Notice that we can easily extend the predictor in Eq. 1 to include observed node covariates, if any.
We now define a hierarchical generative process for the observed link or non-link under the AMP:
1. Draw K community strengths β_k ∼ N(μ_0, σ_0²).
2. For each node a,
   (a) Draw community memberships π_a ∼ Dirichlet(α).
   (b) Draw popularity θ_a ∼ N(0, σ_1²).
3. For each pair of nodes a and b,
   (a) Draw interaction indicator z_{a→b} ∼ π_a.
   (b) Draw interaction indicator z_{a←b} ∼ π_b.
   (c) Draw the probability of a link: y_ab | z_{a→b}, z_{a←b}, β, θ ∼ logit⁻¹(z_{a→b}, z_{a←b}, β, θ).
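A minimal simulation of this generative process (the numeric hyperparameter values below are hypothetical, chosen only for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 30, 4
    mu0, s0_2, s1_2, alpha_dir = 0.0, 1.0, 1.0, 1.0 / 4

    beta = rng.normal(mu0, np.sqrt(s0_2), size=K)         # community strengths
    pi = rng.dirichlet(alpha_dir * np.ones(K), size=N)    # memberships pi_a
    theta = rng.normal(0.0, np.sqrt(s1_2), size=N)        # popularities theta_a

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    y = np.zeros((N, N), dtype=int)
    for a in range(N):
        for b in range(a + 1, N):
            z_ab = rng.choice(K, p=pi[a])                 # z_{a->b} ~ pi_a
            z_ba = rng.choice(K, p=pi[b])                 # z_{a<-b} ~ pi_b
            x = theta[a] + theta[b] + (beta[z_ab] if z_ab == z_ba else 0.0)
            y[a, b] = y[b, a] = rng.random() < sigmoid(x) # Eq. 1

    print("link density:", y.sum() / (N * (N - 1)))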
Under the AMP, the similarities between the nodes' community memberships and their respective popularities compete to explain the observations.

We can make AMP simpler by replacing the vector of K latent community strengths β with a single community strength β. In §4, we demonstrate that this simpler model gives good predictive performance on small networks.

We analyze data with the AMP via the posterior distribution over the latent variables p(π_{1:N}, θ_{1:N}, z, β_{1:K} | y, α, μ_0, σ_0², σ_1²), where θ_{1:N} represents the node popularities, and the posterior over π_{1:N} represents the community memberships of the nodes. With an estimate of this
latent structure, we can characterize the network in many useful ways. Figure 1 gives an example.
This is a subgraph of the netscience collaboration network [22] with N = 1460 nodes. We analyzed
this network with K = 100 communities, using the algorithm from §3. This results in posterior
estimates of the community memberships and popularities for each node and posterior estimates
of the community assignments for each link. With these estimates, we visualized the discovered
community structure and the popular authors.
In general, with an estimate of this latent structure, we can study individual links, characterizing
the extent to which they occur due to similarity between nodes and the extent to which they are an
artifact of the popularity of the nodes.
3 A stochastic gradient algorithm for nonconjugate variational inference

Our goal is to compute the posterior distribution p(π_{1:N}, θ_{1:N}, z, β_{1:K} | y, α, μ_0, σ_0², σ_1²). Exact inference is intractable; we use variational inference [13].

Traditionally, variational inference is a coordinate ascent algorithm. However, the AMP presents two challenges. First, in variational inference the coordinate updates are available in closed form only when all the nodes in the graphical model satisfy conditional conjugacy. The AMP is not conditionally conjugate. To see this, note that the Gaussian priors on the popularity θ and the community strengths β are not conjugate to the conditional likelihood of the data. Second, coordinate ascent algorithms iterate over all the O(N²) node pairs, making inference intractable for large networks.
We address these challenges by deriving a stochastic gradient algorithm that optimizes a tractable lower bound of the variational objective [11]. Our algorithm avoids the O(N²) computational cost per iteration by subsampling a "mini-batch" of random nodes and a subset of their interactions in each iteration [9].
3.1 The variational objective

In variational inference, we define a family of distributions over the hidden variables q(π, θ, β, z) and find the member of that family that is closest to the true posterior. We use the mean-field family, with the following variational distributions:

q(z_{a→b} = i, z_{a←b} = j) = φ_ab^{ij};   q(β_k) = N(β_k; μ_k, σ_β²);
q(π_n) = Dirichlet(π_n; γ_n);   q(θ_n) = N(θ_n; λ_n, σ_θ²).  (2)

The posterior over the joint distribution of link community assignments per node pair (a, b) is parameterized by the per-interaction memberships φ_ab¹, the community memberships by γ, the community strength distributions by μ and the popularity distributions by λ.

Minimizing the KL divergence between q and the true posterior is equivalent to optimizing an evidence lower bound (ELBO) L, a bound on the log likelihood of the observations. We obtain this bound by applying Jensen's inequality [13] to the data likelihood. The ELBO is

L = Σ_n E_q[log p(π_n | α)] − Σ_n E_q[log q(π_n | γ_n)]
  + Σ_n E_q[log p(θ_n | σ_1²)] − Σ_n E_q[log q(θ_n | λ_n, σ_θ²)]
  + Σ_k E_q[log p(β_k | μ_0, σ_0²)] − Σ_k E_q[log q(β_k | μ_k, σ_β²)]
  + Σ_{a,b} E_q[log p(z_{a→b} | π_a)] + E_q[log p(z_{a←b} | π_b)] − E_q[log q(z_{a→b}, z_{a←b} | φ_ab)]
  + Σ_{a,b} E_q[log p(y_ab | z_{a→b}, z_{a←b}, β, θ)].  (3)
Notice that the first three lines in Eq. 3 contain summations over communities and nodes; we call these global terms. They relate to the global parameters, which are (γ, λ, μ). The remaining lines contain summations over all node pairs; we call these local terms. They relate to the local parameters, which are the φ_ab. The distinction between the global and local parameters is important: the updates to global parameters depend on all (or many) local parameters, while the updates to local parameters for a pair of nodes only depend on the relevant global and local parameters in that context.

Estimating the global variational parameters is a challenging computational problem. Coordinate ascent inference must consider each pair of nodes at each iteration, but even a single pass through the O(N²) node pairs can be prohibitive. Previous work [9] has taken advantage of conditional conjugacy of the MMSB to develop fast stochastic variational inference algorithms. Unlike the MMSB, the AMP is not conditionally conjugate. Nevertheless, by carefully manipulating the variational objective, we can develop a scalable stochastic variational inference algorithm for the AMP.
3.2 Lower bounding the variational objective

To optimize the ELBO with respect to the local and global parameters we need its derivatives. The data likelihood terms in the ELBO can be written as

E_q[log p(y_ab | z_{a→b}, z_{a←b}, β, θ)] = y_ab E_q[x_ab] − E_q[log(1 + exp(x_ab))],  (4)

where we define x_ab ≜ θ_a + θ_b + Σ_{k=1}^K β_k δ_ab^k. The terms in Eq. 4 cannot be expanded analytically. To address this issue, we further lower bound −E_q[log(1 + exp(x_ab))] using Jensen's inequality [13],

−E_q[log(1 + exp(x_ab))] ≥ −log[E_q(1 + exp(x_ab))]
  = −log[1 + E_q[exp(θ_a + θ_b + Σ_{k=1}^K β_k δ_ab^k)]]
  = −log[1 + exp(λ_a + σ_θ²/2) exp(λ_b + σ_θ²/2) s_ab],  (5)

¹Following [15], we use a structured mean-field assumption.
Algorithm 1 The stochastic AMP algorithm
1: Initialize variational parameters. See §3.5.
2: while convergence criteria are not met do
3:   Sample a mini-batch S of nodes. Let P be the set of node pairs in S.
4:   local step
5:   Optimize φ_ab ∀(a, b) ∈ P using Eq. 11 and Eq. 12.
6:   global step
7:   Update memberships γ_a, for each node a ∈ S, using stochastic natural gradients in Eq. 6.
8:   Update popularities λ_a, for each node a ∈ S, using stochastic gradients in Eq. 7.
9:   Update community strengths μ using stochastic gradients in Eq. 9.
10:  Set ρ_a(t) = (τ_0 + t_a)^{−κ}; t_a ← t_a + 1, for each node a ∈ S.
11:  Set ρ_0(t) = (τ_0 + t)^{−κ}; t ← t + 1.
12: end while
where we define s_ab ≜ Σ_{k=1}^K φ_ab^{kk} exp{μ_k + σ_β²/2} + (1 − Σ_{k=1}^K φ_ab^{kk}). In simplifying Eq. 5, we have used that q(θ_n) is a Gaussian. Using the mean of a log-normal distribution, we have E_q[exp(θ_n)] = exp(λ_n + σ_θ²/2). A similar substitution applies for the terms involving β_k in Eq. 5. We substitute Eq. 5 in Eq. 3 to obtain a tractable lower bound L₀ of the ELBO L in Eq. 3. This allows us to develop a coordinate ascent algorithm that iteratively updates the local and global parameters to optimize this lower bound on the ELBO.

3.3 The global step
We optimize the ELBO with respect to the global variational parameters using stochastic gradient ascent. Stochastic gradient algorithms follow noisy estimates of the gradient with a decreasing step-size. If the expectation of the noisy gradient equals the gradient and if the step-size decreases according to a certain schedule, then the algorithm converges to a local optimum [26]. Subsampling the data to form noisy gradients scales inference as we avoid the expensive all-pairs sums in Eq. 3.

The global step updates the global community memberships γ, the global popularity parameters λ and the global community strength parameters μ with a stochastic gradient of the lower bound on the ELBO L₀. In [9], the authors update community memberships of all nodes after each iteration by obtaining the natural gradients of the ELBO² with respect to the vector γ of dimension N × K. We use natural gradients for the memberships too, but use distinct stochastic optimizations for the memberships and popularity parameters of each node and maintain a separate learning rate for each node. This restricts the per-iteration updates to nodes in the current mini-batch.
Since the variational objective is a sum of terms, we can cheaply compute a stochastic gradient by
first subsampling a subset of terms and then forming an appropriately scaled gradient. We use a
variant of the random node sampling method proposed in [9]. At each iteration we sample a node
uniformly at random from the N nodes in the network. (In practice we sample a "mini-batch" S of
nodes per update to reduce noise [11, 9].) While a naive method will include all interactions of a
sampled node as the observed pairs, we can leverage network sparsity for efficiency; in many real
networks, only a small fraction of the node pairs are linked. Therefore, for each sampled node, we
include as observations all of its links and a small uniform sample of m0 non-links.
Let ∂γ_a^t be the natural gradient of L₀ with respect to γ_a, and ∂λ_a^t and ∂μ_k^t be the gradients of L₀ with respect to λ_a and μ_k, respectively. Following [2, 9], we have

∂γ_{a,k}^t = −γ_{a,k}^{t−1} + α_k + Σ_{(a,b)∈links(a)} φ_ab^{kk}(t) + Σ_{(a,b)∈nonlinks(a)} φ_ab^{kk}(t),  (6)

where links(a) and nonlinks(a) correspond to the set of links and non-links of a in the training set. Notice that an unbiased estimate of the summation term over non-links in Eq. 6 can be obtained from a subsample of the node's non-links. Therefore, the gradient of L₀ with respect to the membership parameter γ_a, computed using all of the node's links and a subsample of its non-links, is a noisy but unbiased estimate of the natural gradient in Eq. 6.
²The natural gradient [3] points in the direction of steepest ascent in the Riemannian space. The local distance in the Riemannian space is defined by KL divergence, a better measure of dissimilarity between probability distributions than Euclidean distance [11].
The gradient of the approximate ELBO with respect to the popularity parameter λ_a is

∂λ_a^t = −λ_a^{t−1}/σ_1² + Σ_{(a,b)∈links(a) ∪ nonlinks(a)} (y_ab − r_ab s_ab),  (7)

where we define r_ab as

r_ab ≜ exp{λ_a + σ_θ²/2} exp{λ_b + σ_θ²/2} / (1 + exp{λ_a + σ_θ²/2} exp{λ_b + σ_θ²/2} s_ab).  (8)

Finally, the stochastic gradient of L₀ with respect to the global community strength parameter μ_k is

∂μ_k^t = (μ_0 − μ_k^{t−1})/σ_0² + (N/(2|S|)) Σ_{(a,b)∈links(S) ∪ nonlinks(S)} φ_ab^{kk} (y_ab − r_ab exp{μ_k + σ_β²/2}).  (9)
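For concreteness, the per-pair quantities entering Eqs. 7-9 can be computed as below; all variational values are hypothetical placeholders.

    import numpy as np

    K = 4
    lam_a, lam_b = 0.1, -0.3            # popularity means lambda_a, lambda_b
    s_theta2, s_beta2 = 0.5**2, 0.1**2  # fixed variational variances
    mu = np.zeros(K)                    # community strength means mu_k
    phi_diag = np.full(K, 1.0 / K**2)   # diagonal of phi_ab (uniform K x K here)

    # s_ab from Eq. 5 and r_ab from Eq. 8
    s_ab = np.sum(phi_diag * np.exp(mu + s_beta2 / 2)) + (1 - phi_diag.sum())
    e_ab = np.exp(lam_a + s_theta2 / 2) * np.exp(lam_b + s_theta2 / 2)
    r_ab = e_ab / (1 + e_ab * s_ab)

    y_ab = 1
    grad_lam_pair = y_ab - r_ab * s_ab                                # Eq. 7 summand
    grad_mu_pair = phi_diag * (y_ab - r_ab * np.exp(mu + s_beta2 / 2))  # Eq. 9 summand
    print(r_ab, grad_lam_pair, grad_mu_pair)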
As with the community membership gradients, notice that an unbiased estimate of the summation term over non-links in Eq. 7 and Eq. 9 can be obtained from a subsample of the node's non-links. To obtain an unbiased estimate of the true gradient with respect to μ_k, the summation over a node's links and non-links must be scaled by the inverse probability of subsampling that node in Eq. 9. Since each pair is shared between two nodes, and we use a mini-batch with S nodes, the summations over the node pairs are scaled by N/(2|S|) in Eq. 9.

We can interpret the gradients in Eq. 7 and Eq. 9 by studying the terms involving r_ab in Eq. 7 and Eq. 9. In Eq. 7, (y_ab − r_ab s_ab) is the residual for the pair (a, b), while in Eq. 9, (y_ab − r_ab exp{μ_k + σ_β²/2}) is the residual for the pair (a, b) conditional on the latent community assignment for both nodes a and b being set to k. Further, notice that the updates for the global parameters of node a and b, and the updates for μ depend only on the diagonal entries of the indicator variational matrix φ_ab. We can similarly obtain stochastic gradients for the variational variances σ_β and σ_θ; however, in our experiments we found that fixing them already gives good results. (See §4.)

The global step for the global parameters follows the noisy gradient with an appropriate step-size:

γ_a ← γ_a + ρ_a(t) ∂γ_a^t;   λ_a ← λ_a + ρ_a(t) ∂λ_a^t;   μ ← μ + ρ_0(t) ∂μ^t.  (10)

We maintain separate learning rates ρ_a for each node a, and only update the γ and λ for the nodes in the mini-batch in each iteration. There is a global learning rate ρ_0 for the community strength parameters μ, which are updated in every iteration. For each of these learning rates ρ, we require that Σ_t ρ(t)² < ∞ and Σ_t ρ(t) = ∞ for convergence to a local optimum [26]. We set ρ(t) ≜ (τ_0 + t)^{−κ}, where κ ∈ (0.5, 1] is the learning rate and τ_0 ≥ 0 downweights early iterations.
3.4 The local step

We now derive the updates for the local parameters. The local step optimizes the per-interaction memberships φ with respect to a subsample of the network. There is a per-interaction variational parameter of dimension K × K for each node pair, φ_ab, representing the posterior approximation of which pair of communities are active in determining the link or non-link. The coordinate ascent update for φ_ab is

φ_ab^{kk} ∝ exp{E_q[log π_{a,k}] + E_q[log π_{b,k}] + y_ab μ_k − r_ab (exp{μ_k + σ_β²/2} − 1)},  (11)
φ_ab^{ij} ∝ exp{E_q[log π_{a,i}] + E_q[log π_{b,j}]},  i ≠ j,  (12)

where r_ab is defined in Eq. 8. We present the full stochastic variational inference in Algorithm 1.
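A sketch of this coordinate update for a single pair, using the standard Dirichlet expectation E_q[log π_{n,k}] = ψ(γ_{n,k}) − ψ(Σ_k γ_{n,k}); all numeric values are hypothetical.

    import numpy as np
    from scipy.special import digamma

    K = 4
    gamma_a = np.ones(K) + np.arange(K)   # Dirichlet parameters of q(pi_a)
    gamma_b = np.ones(K) * 2.0            # Dirichlet parameters of q(pi_b)
    mu, s_beta2 = np.zeros(K), 0.1**2
    y_ab, r_ab = 1, 0.05

    Elog_a = digamma(gamma_a) - digamma(gamma_a.sum())   # E_q[log pi_{a,k}]
    Elog_b = digamma(gamma_b) - digamma(gamma_b.sum())

    log_phi = Elog_a[:, None] + Elog_b[None, :]          # off-diagonal, Eq. 12
    diag = Elog_a + Elog_b + y_ab * mu - r_ab * (np.exp(mu + s_beta2 / 2) - 1)
    np.fill_diagonal(log_phi, diag)                      # diagonal, Eq. 11

    phi = np.exp(log_phi - log_phi.max())
    phi /= phi.sum()                                     # normalize over K x K
    print(phi)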
3.5 Initialization and convergence

We initialize the community memberships γ using approximate posterior memberships from the variational inference algorithm for the MMSB [9]. We initialize popularities λ to the logarithm of the normalized node degrees added to a small random offset, and initialize the strengths μ to zero.
We measure convergence by computing the link prediction accuracy on a validation set with 1% of the network's links, and an equal number of non-links. The algorithm stops either when the change in log-likelihood on this validation set is less than 0.0001%, or if the log-likelihood decreases for consecutive iterations.
Figure 2: Network data sets. N is the number of nodes, d is the percent of node pairs that are links and P is the mean perplexity over the links and nonlinks in the held-out test set.

DATA SET          N      d(%)    P_AMP          P_MMSB         TYPE       SOURCE
US AIR            712    1.7     2.75 ± 0.04    3.41 ± 0.15    TRANSPORT  [25]
POLITICAL BLOGS   1224   1.9     2.97 ± 0.03    3.12 ± 0.01    HYPERLINK  [1]
NETSCIENCE        1450   0.2     2.73 ± 0.11    3.02 ± 0.19    COLLAB.    [22]
RELATIVITY        4158   0.1     3.69 ± 0.18    6.53 ± 0.37    COLLAB.    [18]
HEP-TH            8638   0.05    12.35 ± 0.17   23.06 ± 0.87   COLLAB.    [18]
HEP-PH            11204  0.16    2.75 ± 0.06    3.310 ± 0.15   COLLAB.    [18]
ASTRO-PH          17903  0.11    5.04 ± 0.02    5.28 ± 0.07    COLLAB.    [18]
COND-MAT          36458  0.02    10.82 ± 0.09   13.52 ± 0.21   COLLAB.    [22]
BRIGHTKITE        56739  0.01    10.98 ± 0.39   41.11 ± 0.89   SOCIAL     [18]

[Figure 3 panels: mean precision (top row) and mean recall (bottom row) versus the number of recommendations (10 to 100) for AMP and MMSB on the relativity, astro, hepph, hepth, cond-mat and brightkite networks; axis tick values omitted.]
Figure 3: The AMP model outperforms the MMSB model of [9] in predictive accuracy on real networks. Both
models were fit using stochastic variational inference [11]. For the data sets shown, the number of communities
K was set to 100 and hyperparameters were set to the same values across data sets. The perplexity results are
based on five replications. A single replication is shown for the mean precision and mean recall.
4 Empirical study
We use the predictive approach to evaluating model fitness [8], comparing the predictive accuracy
of AMP (Algorithm 1) to the stochastic variational inference algorithm for the MMSB with link
sampling [9]. In all data sets, we found that AMP gave better fits to real-world networks. Our
networks range in size from 712 nodes to 56,739 nodes. Some networks are sparse, having as
little as 0.01% of all pairs as links, while others have up to 2% of all pairs as links. Our data sets
contain four types of networks: hyperlink, transportation, collaboration and social networks. We
implemented Algorithm 1 in 4,800 lines of C++ code.³
Metrics. We used perplexity, mean precision and mean recall in our experiments to evaluate the
predictive accuracy of the algorithms. We computed the link prediction accuracy using a test set of
node pairs that are not observed during training. The test set consists of 10% of randomly selected
links and non-links from each data set. During training, these test set observations are treated as
zeros. We approximate the predictive distribution of a held-out node pair y_ab under the AMP using posterior estimates π̂, θ̂ and β̂ as

p(y_ab | y) ≈ Σ_{z_{a→b}} Σ_{z_{a←b}} p(y_ab | z_{a→b}, z_{a←b}, β̂, θ̂) p(z_{a→b} | π̂_a) p(z_{a←b} | π̂_b).  (13)

³Our software is available at https://github.com/premgopalan/sviamp.
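Because the predictor in Eq. 1 depends only on whether the two indicators agree, the K² terms in Eq. 13 collapse to K "same community" terms plus one shared "different communities" term. A minimal sketch with hypothetical point estimates:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def link_prob(pi_a, pi_b, theta_a, theta_b, beta):
        base = sigmoid(theta_a + theta_b)         # indicators disagree
        same = sigmoid(theta_a + theta_b + beta)  # both pick community k
        p_same = pi_a * pi_b                      # prob of agreeing on each k
        return np.sum(p_same * same) + (1 - p_same.sum()) * base

    pi_a = np.array([0.7, 0.1, 0.1, 0.1])
    pi_b = np.array([0.6, 0.2, 0.1, 0.1])
    beta = np.array([2.0, 1.0, 1.0, 0.5])
    print(link_prob(pi_a, pi_b, -1.0, -0.5, beta))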
[Figure 4 panels: mean precision versus the ratio of maximum degree to average degree (2 to 8), for μ = 0, 0.2 and 0.4, comparing AMP and MMSB; axis tick values omitted.]
Figure 4: The AMP predicts significantly better than the MMSB [9] on 12 LFR benchmark networks [17].
Each plot shows 4 networks with increasing right-skewness in degree distribution. μ is the fraction of noisy links between dissimilar nodes (nodes that share no communities). The precision is computed at 50 recommendations for each node, and is averaged over all nodes in the network.
Perplexity is the exponential of the average predictive log likelihood of the held-out node pairs. For
mean precision and recall, we generate the top n pairs for each node ranked by the probability of
a link between them. The ranked list of pairs for each node includes nodes in the test set, as well
as nodes in the training set that were non-links. We compute precision-at-m, which measures the
fraction of the top m recommendations present in the test set; and we compute recall-at-m, which
captures the fraction of nodes in the test set present in the top m recommendations. We vary m from
10 to 100. We then obtain the mean precision and recall across all nodes.⁴
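A small helper that computes both quantities for one node's ranked list (toy data, hypothetical):

    def precision_recall_at_m(ranked_pairs, test_links, m):
        """ranked_pairs: candidate pairs for one node, sorted by predicted
        link probability; test_links: that node's held-out links."""
        top = ranked_pairs[:m]
        hits = sum(1 for p in top if p in test_links)
        precision = hits / m
        recall = hits / max(len(test_links), 1)
        return precision, recall

    ranked = [(0, 5), (0, 2), (0, 9), (0, 7), (0, 1)]
    held_out = {(0, 2), (0, 7), (0, 4)}
    print(precision_recall_at_m(ranked, held_out, m=5))   # (0.4, 2/3)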
Hyperparameters and constants. For the stochastic AMP algorithm, we set the "mini-batch" size S = N/100, where N is the number of nodes in the network and we set the non-link sample size m₀ = 100. We set the number of communities K = 2 for the political blog network and K = 20 for the US air; for all other networks, K was set to 100. We set the hyperparameters σ₀² = 1.0, σ₁² = 10.0 and μ₀ = 0, fixed the variational variances at σ_β = 0.1 and σ_θ = 0.5 and set the learning parameters τ₀ = 65536 and κ = 0.5. We set the Dirichlet hyperparameter α = 1/K for the AMP and the MMSB.
Results on real networks. Figure 2 compares the AMP and the MMSB stochastic algorithms on a
number of real data sets. The AMP definitively outperforms the MMSB in predictive performance.
All hyperparameter settings were held fixed across data sets. The first four networks are small in
size, and were fit using the AMP model with a single community strength parameter. All other
networks were fit with the AMP model with K community strength parameters. As N increases,
the gap between the mean precision and mean recall performance of these algorithms appears to
increase. Without node popularities, MMSB is dependent entirely on node memberships and community strengths to predict links. Since K is held fixed, communities are likely to have more nodes
as N increases, making it increasingly difficult for the MMSB to predict links. For the small US air,
political blogs and netscience data sets, we obtained similar performance for the replication shown
in Figure 2. For the AMP the mean precision at 10 for US Air, political blogs and netscience were
0.087, 0.07, 0.092, respectively; for the MMSB the corresponding values were 0.007, 0.0, 0.063,
respectively.
Results on synthetic networks. We generated 12 LFR benchmark networks [17], each with 1000
nodes. Roughly 50% of the nodes were assigned to 4 overlapping communities, and the other 50%
were assigned to single communities. We set a community size range of [200, 500] and a mean node
degree of 10 with power-law exponent set to 2.0. Figure 4 shows that the MMSB performs poorly as
the skewness is increased, while the AMP performs significantly better in the presence of both noisy
links and right-skewness, both characteristics of real networks. The skewness in degree distributions
causes the community strength parameters of MMSB to overestimate or underestimate the linking
patterns within communities. The per-node popularities in the AMP can capture the heterogeneity
in node degrees, while learning the corrected community strengths.
Acknowledgments
David M. Blei is supported by ONR N00014-11-1-0651, NSF CAREER IIS-0745520, and
the Alfred P. Sloan foundation. Chong Wang is supported by NSF DBI-0546594 and NIH
1R01GM093156.
⁴Precision and recall are better metrics than ROC AUC on highly skewed data sets [5].
References
[1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 U.S. election: divided they blog. In LinkKDD '05, pages 36–43, New York, NY, USA, 2005. ACM.
[2] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing. Mixed membership stochastic blockmodels. J. Mach. Learn. Res., 9:1981–2014, June 2008.
[3] S. Amari. Differential geometry of curved exponential families – curvatures and information loss. The Annals of Statistics, 10(2):357–385, June 1982.
[4] B. Ball, B. Karrer, and M. E. J. Newman. Efficient and principled method for detecting communities in networks. Physical Review E, 84(3):036103, Sept. 2011.
[5] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 233–240, New York, NY, USA, 2006. ACM.
[6] S. Fortunato. Community detection in graphs. Physics Reports, 486(3–5):75–174, Feb. 2010.
[7] T. M. J. Fruchterman and E. M. Reingold. Graph drawing by force-directed placement. Softw. Pract. Exper., 21(11):1129–1164, Nov. 1991.
[8] S. Geisser and W. Eddy. A predictive approach to model selection. Journal of the American Statistical Association, 74:153–160, 1979.
[9] P. K. Gopalan and D. M. Blei. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36):14534–14539, 2013.
[10] P. Hoff, A. Raftery, and M. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[11] M. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 2013.
[12] H. Jeong, Z. Néda, and A. L. Barabási. Measuring preferential attachment in evolving networks. EPL (Europhysics Letters), 61(4):567, 2003.
[13] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Mach. Learn., 37(2):183–233, Nov. 1999.
[14] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83:016107, Jan. 2011.
[15] D. I. Kim, P. Gopalan, D. M. Blei, and E. B. Sudderth. Efficient online inference for Bayesian nonparametric relational models. In Neural Information Processing Systems, 2013.
[16] P. N. Krivitsky, M. S. Handcock, A. E. Raftery, and P. D. Hoff. Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 31(3):204–213, July 2009.
[17] A. Lancichinetti and S. Fortunato. Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. Physical Review E, 80(1):016118, July 2009.
[18] J. Leskovec, K. J. Lang, A. Dasgupta, and M. W. Mahoney. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. In Internet Mathematics, 2008.
[19] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In Proceedings of the Twelfth International Conference on Information and Knowledge Management, CIKM '03, pages 556–559, New York, NY, USA, 2003. ACM.
[20] P. McCullagh and J. A. Nelder. Generalized Linear Models, Second Edition. Chapman and Hall/CRC, 2nd edition, Aug. 1989.
[21] M. E. J. Newman. Assortative mixing in networks. Physical Review Letters, 89(20):208701, Oct. 2002.
[22] M. E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3):036104, 2006.
[23] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, Sept. 2001.
[24] F. Papadopoulos, M. Kitsak, M. Á. Serrano, M. Boguñá, and D. Krioukov. Popularity versus similarity in growing networks. Nature, 489(7417):537–540, Sept. 2012.
[25] RITA. U.S. Air Carrier Traffic Statistics, Bur. Trans. Stats, 2010.
[26] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, Sept. 1951.
[27] Y. J. Wang and G. Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.
4,394 | 4,978 | A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks
Junming Yin
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Qirong Ho
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Eric P. Xing
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We propose a scalable approach for making inference about latent spaces of large
networks. With a succinct representation of networks as a bag of triangular motifs,
a parsimonious statistical model, and an efficient stochastic variational inference
algorithm, we are able to analyze real networks with over a million vertices and
hundreds of latent roles on a single machine in a matter of hours, a setting that is
out of reach for many existing methods. When compared to the state-of-the-art
probabilistic approaches, our method is several orders of magnitude faster, with
competitive or improved accuracy for latent space recovery and link prediction.
1 Introduction
In the context of network analysis, a latent space refers to a space of unobserved latent representations of individual entities (i.e., topics, roles, or simply embeddings, depending on how users
would interpret them) that govern the potential patterns of network relations. The problem of latent
space inference amounts to learning the bases of such a space and reducing the high-dimensional
network data to such a lower-dimensional space, in which each entity has a position vector. Depending on model semantics, the position vectors can be used for diverse tasks such as community
detection [1, 5], user personalization [4, 13], link prediction [14] and exploratory analysis [9, 19, 8].
However, scalability is a key challenge for many existing probabilistic methods, as even recent state-of-the-art methods [5, 8] still require days to process modest networks of around 100,000 nodes.
To perform latent space analysis on at least million-node (if not larger) real social networks with
many distinct latent roles [24], one must design inferential mechanisms that scale in both the number
of vertices N and the number of latent roles K. In this paper, we argue that the following three
principles are crucial for successful large-scale inference: (1) succinct but informative representation
of networks; (2) parsimonious statistical modeling; (3) scalable and parallel inference algorithms.
Existing approaches [1, 5, 7, 8, 14] are limited in that they consider only one or two of the above
principles, and therefore can not simultaneously achieve scalability and sufficient accuracy. For
example, the mixed-membership stochastic blockmodel (MMSB) [1] is a probabilistic latent space
model for edge representation of networks. Its batch variational inference algorithm has O(N²K²) time complexity and hence cannot be scaled to large networks. The a-MMSB [5] improves upon MMSB by applying principles (2) and (3): it reduces the dimension of the parameter space from O(K²) to O(K), and applies a stochastic variational algorithm for fast inference. Fundamentally, however, the a-MMSB still depends on the O(N²) adjacency matrix representation of networks, just like the MMSB. The a-MMSB inference algorithm mitigates this issue by downsampling zero elements in the matrix, but is still not fast enough to handle networks with N ≥ 100,000.
But looking beyond the edge-based relations and features, other higher-order structural statistics
(such as the counts of triangles and k-stars) are also widely used to represent the probability distribution over the space of networks, and are viewed as crucial elements in building a good-fitting
exponential random graph model (ERGM) [11]. These higher-order relations have motivated the
development of the triangular representation of networks [8], in which each network is represented
succinctly as a bag of triangular motifs with size typically much smaller than Θ(N²). This succinct representation has proven effective in extracting informative mixed-membership roles from
networks with high fidelity, thus achieving the first principle (1). However, the corresponding statistical model, called the mixed-membership triangular model (MMTM), only scales well against the
size of a network, but does not scale to large numbers of latent roles (i.e., dimension of the latent
space). To be precise, if there are K distinct latent roles, its tensor of triangle-generating parameters is of size O(K³), and its blocked Gibbs sampler requires O(K³) time per iteration. Our own
experiments show that the MMTM Gibbs algorithm is unusable for K > 10.
We now present a scalable approach to both latent space modeling and inference algorithm design
that encompasses all three aforementioned principles for large networks. Specifically, we build our
approach on the bag-of-triangles representation of networks [8] and apply principles (2) and (3),
yielding a fast inference procedure that has time complexity O(N K). In Section 3, we propose the
parsimonious triangular model (PTM), in which the dimension of the triangle-generating parameters
only grows linearly in K. The dramatic reduction is principally achieved by sharing parameters
among certain groups of latent roles. Then, in Section 4, we develop a fast stochastic natural gradient
ascent algorithm for performing variational inference, where an unbiased estimate of the natural
gradient is obtained by subsampling a "mini-batch" of triangular motifs. Instead of adopting a fully
factorized, naive mean-field approximation, which we find performs poorly in practice, we pursue
a structured mean-field approach that captures higher-order dependencies between latent variables.
These new developments all combine to yield an efficient inference algorithm that usually converges
after 2 passes on each triangular motif (or up to 4-5 passes at worst), and achieves competitive or
improved accuracy for latent space recovery and link prediction on synthetic and real networks.
Finally, in Section 5, we demonstrate that our algorithm converges and infers a 100-role latent space
on a 1M-node Youtube social network in just 4 hours, using a single machine with 8 threads.
2 Triangular Representation of Networks
We take a scalable approach to network modeling by representing each network succinctly as a
bag of triangular motifs [8]. Each triangular motif is a connected subgraph over a vertex triple
containing 2 or 3 edges (called open triangle and closed triangle respectively). Empty and single-edge triples are ignored. Although this triangular format does not preserve all network information
found in an edge representation, these three-node connected subgraphs are able to capture a number
of informative structural features in the network. For example, in social network theory, the notion of
triadic closure [21, 6] is commonly measured by the relative number of closed triangles compared to
the total number of connected triples, known as the global clustering coefficient or transitivity [17].
The same quantity is treated as a general network statistic in the exponential random graph model
(ERGM) literature [16]. Furthermore, the most significant and recurrent structural patterns in many
complex networks, so-called "network motifs", turn out to be connected three-node subgraphs [15].
Most importantly of all, triangular modeling requires much less computational cost compared to
edge-based models, with little or no degradation of performance for latent space recovery [8]. In
networks with N vertices and low maximum vertex degree D, the number of triangular motifs
Θ(ND²) is normally much smaller than Θ(N²), allowing us to construct more efficient inference algorithms scalable to larger networks. For high-maximum-degree networks, the triangular motifs can be subsampled in a node-centric fashion as a local data reduction step. For each vertex i with degree higher than a user-chosen threshold τ, uniformly sample C(τ, 2) triangles from the set composed of (a) its adjacent closed triangles, and (b) its adjacent open triangles that are centered on i. Vertices with degree ≤ τ keep all triangles from their set. It has been shown that this τ-subsampling procedure can approximately preserve the distribution over open and closed triangles, and allows for much faster inference algorithms (linear growth in N) at a small cost in accuracy [8].
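To make the subsampling step concrete, here is a minimal sketch of the node-centric procedure (our own illustration, not the authors' released code; the adjacency-dictionary input format and the function name are assumptions). It keeps every motif at a low-degree vertex and a uniform sample of C(τ, 2) motifs elsewhere; motif types are assigned afterwards:

```python
import random
from itertools import combinations

def tau_subsample_triangles(adj, tau, seed=0):
    """Node-centric tau-subsampling of triangular motifs.

    adj: dict mapping each vertex to the set of its neighbours.
    Every pair of neighbours (j, k) of vertex i spans a motif on
    {i, j, k}: open and centred on i if the edge (j, k) is absent,
    closed if it is present. Vertices with more than C(tau, 2)
    adjacent motifs (equivalently, degree > tau) keep only a
    uniform sample of C(tau, 2) of them.
    """
    rng = random.Random(seed)
    budget = tau * (tau - 1) // 2  # C(tau, 2)
    kept = set()
    for i, nbrs in adj.items():
        motifs = [tuple(sorted((i, j, k)))
                  for j, k in combinations(sorted(nbrs), 2)]
        if len(motifs) > budget:   # degree(i) > tau
            motifs = rng.sample(motifs, budget)
        kept.update(motifs)
    return kept
```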
In what follows, we assume that a preprocessing step has been performed, namely, extracting and τ-subsampling triangular motifs (which can be done in O(1) time per sample, and requires < 1% of the actual inference time), to yield a bag-of-triangles representation of the input network. For each triplet of vertices i, j, k ∈ {1, . . . , N}, i < j < k, let E_ijk denote the observed type of triangular motif formed among these three vertices: E_ijk = 1, 2 and 3 represent an open triangle with i, j and k in the center respectively, and E_ijk = 4 if a closed triangle is formed. Because empty and single-edge triples are discarded, the set of triples with triangular motifs formed, I = {(i, j, k) : i < j < k, E_ijk = 1, 2, 3 or 4}, is of size O(N τ²) after τ-subsampling [8].
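The E_ijk encoding maps directly to the edge pattern on a vertex triple; a small helper (ours, using the same assumed adjacency format as the sketch above) makes the convention explicit:

```python
def motif_type(i, j, k, adj):
    """E_ijk encoding for i < j < k: 1, 2, 3 = open triangle centred
    on i, j, k respectively; 4 = closed triangle; None = empty or
    single-edge triple, which the model discards."""
    e_ij, e_ik, e_jk = (j in adj[i]), (k in adj[i]), (k in adj[j])
    n_edges = e_ij + e_ik + e_jk
    if n_edges == 3:
        return 4
    if n_edges == 2:
        if e_ij and e_ik:
            return 1  # centre i (j and k not adjacent)
        if e_ij and e_jk:
            return 2  # centre j
        return 3      # centre k
    return None
```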
3 Parsimonious Triangular Model
Given the input network, now represented as a bag of triangular motifs, our goal is to make inference about the latent position vector θ_i of each vertex i ∈ {1, . . . , N}. We take a mixed-membership
| (s_i,jk, s_j,ik, s_k,ij) | Equivalence classes | Conditional probability of E_ijk ∈ {1, 2, 3, 4} |
|---|---|---|
| x = s_i,jk = s_j,ik = s_k,ij | {1, 2, 3}, {4} | Discrete(B_xxx,1/3, B_xxx,1/3, B_xxx,1/3, B_xxx,2) |
| x = s_i,jk = s_j,ik ≠ s_k,ij | {1, 2}, {3}, {4} | Discrete(B_xx,1/2, B_xx,1/2, B_xx,2, B_xx,3) |
| x = s_i,jk = s_k,ij ≠ s_j,ik | {1, 3}, {2}, {4} | Discrete(B_xx,1/2, B_xx,2, B_xx,1/2, B_xx,3) |
| x = s_j,ik = s_k,ij ≠ s_i,jk | {2, 3}, {1}, {4} | Discrete(B_xx,2, B_xx,1/2, B_xx,1/2, B_xx,3) |
| s_k,ij ≠ s_i,jk ≠ s_j,ik (all distinct) | {1, 2, 3}, {4} | Discrete(B_0,1/3, B_0,1/3, B_0,1/3, B_0,2) |

Table 1: Equivalence classes and conditional probabilities of E_ijk given (s_i,jk, s_j,ik, s_k,ij) (see text for details).
approach: each vertex i can take a mixture distribution over K latent roles governed by a mixed-membership vector θ_i ∈ Δ^{K−1} restricted to the (K − 1)-simplex. Such vectors can be used for
performing community detection and link prediction, as demonstrated in Section 5. Following a design principle similar to the Mixed-Membership Triangular Model (MMTM) [8], our Parsimonious
Triangular Model (PTM) is essentially a latent-space model that defines the generative process for a
bag of triangular motifs. However, compared to the MMTM, the major advantage of the PTM lies in
its more compact and lower-dimensional nature that allows for more efficient inference algorithms
(see Global Update step in Section 4). The dimension of triangle-generating parameters in the PTM
is just O(K), rather than O(K³) in the MMTM (see below for further discussion).
To form a triangular motif E_ijk for each triplet of vertices (i, j, k), a triplet of role indices s_i,jk, s_j,ik, s_k,ij ∈ {1, . . . , K} is first chosen based on the mixed-membership vectors θ_i, θ_j, θ_k. These indices designate the roles taken by each vertex participating in this triangular motif. There are O(K³) distinct configurations of such latent role triplets, and the MMTM uses a tensor of triangle-generating parameters of the same size to define the probability of E_ijk, one entry B_xyz for each possible configuration (x, y, z). In the PTM, we reduce the number of such parameters by partitioning the O(K³) configuration space into several groups, and then sharing parameters within the same group. The partitioning is based on the number of distinct states in the configuration of the role triplet: 1) if the three role indices are all in the same state x, the triangle-generating probability is determined by B_xxx; 2) if only two role indices exhibit the same state x (called majority role), the probability of triangles is governed by B_xx, which is shared across different minority roles; 3) if the three role indices are all distinct, the probability of triangular motifs depends on B_0, a single parameter independent of the role configurations. This sharing yields just O(K) parameters B_0, B_xx, B_xxx, x ∈ {1, . . . , K}, allowing PTM to scale to far more latent roles than MMTM. A similar idea was proposed in a-MMSB [5], using one parameter to determine inter-role link probabilities, rather than O(K²) parameters for all pairs of distinct roles, as in the original MMSB [1].
Once the role triplet (s_i,jk, s_j,ik, s_k,ij) is chosen, some of the triangular motifs can become indistinguishable. To illustrate, in the case of x = s_i,jk = s_j,ik ≠ s_k,ij, one cannot distinguish the open triangle with i in the center (E_ijk = 1) from that with j in the center (E_ijk = 2), because both are open triangles centered at a vertex with majority role x, and are thus structurally equivalent under the given role configuration. Formally, this configuration induces a set of triangle equivalence classes {{1, 2}, {3}, {4}} of all possible triangular motifs {1, 2, 3, 4}. We treat the triangular motifs within the same equivalence class as stochastically equivalent; that is, the conditional probabilities of events E_ijk = 1 and E_ijk = 2 are the same if x = s_i,jk = s_j,ik ≠ s_k,ij. All possible cases are enumerated as follows (see also Table 1; a code sketch of these conditional probabilities follows the list):

1. If all three vertices have the same role x, all three open triangles are equivalent and the induced set of equivalence classes is {{1, 2, 3}, {4}}. The probability of E_ijk is determined by B_xxx ∈ Δ¹, where B_xxx,1 represents the total probability of sampling an open triangle from {1, 2, 3} and B_xxx,2 represents the closed triangle probability. Thus, the probability of a particular open triangle is B_xxx,1/3.

2. If only two vertices have the same role x (majority role), the probability of E_ijk is governed by B_xx ∈ Δ². Here, B_xx,1 and B_xx,2 represent the open triangle probabilities (for open triangles centered at a vertex in majority and minority role respectively), and B_xx,3 represents the closed triangle probability. There are two possible open triangles with a vertex in majority role at the center, and hence each has probability B_xx,1/2.

3. If all three vertices have distinct roles, the probability of E_ijk depends on B_0 ∈ Δ¹, where B_0,1 represents the total probability of sampling an open triangle from {1, 2, 3} (regardless of the center vertex's role) and B_0,2 represents the closed triangle probability.
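As promised, here is all of Table 1 expressed as one function. This is a sketch under our own storage conventions (assumed, not the authors'): B_0 is a length-2 probability vector, B_xx a K x 3 matrix and B_xxx a K x 2 matrix, each row on a simplex.

```python
import numpy as np

def motif_probs(x, y, z, B0, Bxx, Bxxx):
    """Conditional distribution over E_ijk in {1, 2, 3, 4} given the
    role triplet (x, y, z) = (s_i,jk, s_j,ik, s_k,ij), per Table 1."""
    if x == y == z:                 # one shared role
        o, c = Bxxx[x]
        return np.array([o / 3, o / 3, o / 3, c])
    if x == y:                      # majority role x at centres i, j
        o_maj, o_min, c = Bxx[x]
        return np.array([o_maj / 2, o_maj / 2, o_min, c])
    if x == z:                      # majority role x at centres i, k
        o_maj, o_min, c = Bxx[x]
        return np.array([o_maj / 2, o_min, o_maj / 2, c])
    if y == z:                      # majority role y at centres j, k
        o_maj, o_min, c = Bxx[y]
        return np.array([o_min, o_maj / 2, o_maj / 2, c])
    o, c = B0                       # all three roles distinct
    return np.array([o / 3, o / 3, o / 3, c])
```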
To summarize, the PTM assumes the following generative process for a bag of triangular motifs (a sampler sketch follows the list):

- Choose B_0 ∈ Δ¹, B_xx ∈ Δ² and B_xxx ∈ Δ¹ for each role x ∈ {1, . . . , K} according to symmetric Dirichlet distributions Dirichlet(η).
- For each vertex i ∈ {1, . . . , N}, draw a mixed-membership vector θ_i ∼ Dirichlet(α).
- For each triplet of vertices (i, j, k), i < j < k:
  - Draw role indices s_i,jk ∼ Discrete(θ_i), s_j,ik ∼ Discrete(θ_j), s_k,ij ∼ Discrete(θ_k).
  - Choose a triangular motif E_ijk ∈ {1, 2, 3, 4} based on B_0, B_xx, B_xxx and the configuration of (s_i,jk, s_j,ik, s_k,ij) (see Table 1 for the conditional probabilities).
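The sketch below strings the three bullets together, reusing motif_probs() from above. It illustrates the generative semantics only, not a network simulator, for the reasons discussed next; the function name and default hyperparameters are ours.

```python
def sample_ptm(N, K, triples, alpha=0.1, eta=0.1, seed=0):
    """Draw a bag of triangular motifs {E_ijk} from the PTM prior.
    `triples` plays the role of the index set I (i < j < k)."""
    rng = np.random.default_rng(seed)
    B0 = rng.dirichlet(eta * np.ones(2))
    Bxx = rng.dirichlet(eta * np.ones(3), size=K)
    Bxxx = rng.dirichlet(eta * np.ones(2), size=K)
    theta = rng.dirichlet(alpha * np.ones(K), size=N)
    motifs = {}
    for (i, j, k) in triples:
        x = rng.choice(K, p=theta[i])
        y = rng.choice(K, p=theta[j])
        z = rng.choice(K, p=theta[k])
        probs = motif_probs(x, y, z, B0, Bxx, Bxxx)
        motifs[(i, j, k)] = 1 + rng.choice(4, p=probs)  # E in {1,2,3,4}
    return motifs, theta
```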
It is worth pointing out that, similar to the MMTM, our PTM is not a generative model of networks per se because (a) empty and single-edge motifs are not modeled, and (b) one can generate a set of triangles that does not correspond to any network, because the generative process does not force overlapping triangles to have consistent edge values. However, given a bag of triangular motifs E extracted from a network, the above procedure defines a valid probabilistic model p(E | α, η) and we can legitimately use it for performing posterior inference p(s, θ, B | E, α, η). We stress that our goal is latent space inference, not network simulation.
4 Scalable Stochastic Variational Inference
In this section, we present a stochastic variational inference algorithm [10] for performing approximate inference under our model. Although it is also feasible to develop such an algorithm for the MMTM [8], the O(NK³) computational complexity precludes its application to large numbers of latent roles. However, due to the parsimonious O(K) parameterization of the PTM, our efficient algorithm has only O(NK) complexity.
We adopted a structured mean-field approximation method, in which the true posterior of latent variables p(s, θ, B | E, α, η) is approximated by a partially factorized distribution q(s, θ, B),

$$ q(s, \theta, B) = \prod_{(i,j,k)\in I} q(s_{i,jk}, s_{j,ik}, s_{k,ij} \mid \phi_{ijk}) \prod_{i=1}^{N} q(\theta_i \mid \gamma_i) \prod_{x=1}^{K} q(B_{xxx} \mid \lambda_{xxx}) \prod_{x=1}^{K} q(B_{xx} \mid \lambda_{xx})\, q(B_0 \mid \lambda_0), $$

where I = {(i, j, k) : i < j < k, E_ijk = 1, 2, 3 or 4} and |I| = O(N τ²). The strong dependencies among the per-triangle latent roles (s_i,jk, s_j,ik, s_k,ij) suggest that we should model them as a group, rather than completely independent as in a naive mean-field approximation¹. Thus, the variational posterior of (s_i,jk, s_j,ik, s_k,ij) is the discrete distribution

$$ q(s_{i,jk}=x,\, s_{j,ik}=y,\, s_{k,ij}=z) = q_{ijk}(x, y, z) = \phi^{xyz}_{ijk}, \qquad x, y, z = 1, \ldots, K. \quad (1) $$
The posterior q(θ_i) is a Dirichlet(γ_i); and the posteriors of B_xxx, B_xx, B_0 are parameterized as: q(B_xxx) = Dirichlet(λ_xxx), q(B_xx) = Dirichlet(λ_xx), and q(B_0) = Dirichlet(λ_0).
The mean field approximation aims to minimize the KL divergence KL(q ‖ p) between the approximating distribution q and the true posterior p; it is equivalent to maximizing a lower bound L(φ, γ, λ) of the log marginal likelihood of the triangular motifs (based on Jensen's inequality) with respect to the variational parameters {φ, γ, λ} [22]:

$$ \log p(E \mid \alpha, \eta) \ge \mathbb{E}_q[\log p(E, s, \theta, B \mid \alpha, \eta)] - \mathbb{E}_q[\log q(s, \theta, B)] =: \mathcal{L}(\phi, \gamma, \lambda). \quad (2) $$
To simplify the notation, we decompose the variational objective L(φ, γ, λ) into a global term and a summation of local terms, one term for each triangular motif (see Appendix for details):

$$ \mathcal{L}(\phi, \gamma, \lambda) = g(\gamma, \lambda) + \sum_{(i,j,k)\in I} \ell(\phi_{ijk}, \gamma, \lambda). \quad (3) $$
The global term g(γ, λ) depends only on the global variational parameters λ, which govern the posterior of the triangle-generating probabilities B, as well as the per-node mixed-membership parameters γ. Each local term ℓ(φ_ijk, γ, λ) depends on per-triangle parameters φ_ijk as well as the global parameters. Define L(γ, λ) := max_φ L(φ, γ, λ), which is the variational objective achieved by fixing the global parameters γ, λ and optimizing the local parameters φ. By equation (3),

$$ \mathcal{L}(\gamma, \lambda) = g(\gamma, \lambda) + \sum_{(i,j,k)\in I} \max_{\phi_{ijk}} \ell(\phi_{ijk}, \gamma, \lambda). \quad (4) $$
Stochastic variational inference is a stochastic gradient ascent algorithm [3] that maximizes L(γ, λ), based on noisy estimates of its gradient with respect to γ and λ. Whereas computing the true gradient ∇L(γ, λ) involves a costly summation over all triangular motifs as in (4), an unbiased noisy approximation of the gradient can be obtained much more cheaply by summing over a small subsample of triangles. With this unbiased estimate of the gradient and a suitable adaptive step size, the algorithm is guaranteed to converge to a stationary point of the variational objective L(γ, λ) [18].

¹We tested a naive mean-field approximation, and it performed very poorly. This is because the tensor of role probabilities q(x, y, z) is often of high rank, whereas naive mean-field is a rank-1 approximation.
Algorithm 1 Stochastic Variational Inference
1: t = 0. Initialize the global parameters λ and γ.
2: Repeat the following steps until convergence:
   (1) Sample a mini-batch of triangles S.
   (2) Optimize the local parameters q_ijk(x, y, z) for all sampled triangles in parallel by (6).
   (3) Accumulate sufficient statistics for the natural gradients of λ, γ (and then discard q_ijk(x, y, z)).
   (4) Optimize the global parameters λ and γ by the stochastic natural gradient ascent rule (7).
   (5) ρ_t ← ρ_0(ρ_1 + t)^κ, t ← t + 1.
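A skeleton of the main loop, with the per-triangle and global updates abstracted behind caller-supplied closures (hypothetical names; steps are keyed to Algorithm 1, and the default constants mirror the settings reported in Section 5):

```python
import numpy as np

def svi(triangles, local_step, global_step, n_iters=100,
        batch_frac=0.1, rho0=100.0, rho1=10000.0, kappa=-0.5, seed=0):
    """Algorithm 1 skeleton. `local_step` performs (6) on a mini-batch
    and returns accumulated sufficient statistics; `global_step`
    applies the natural-gradient rule (7)."""
    rng = np.random.default_rng(seed)
    m = len(triangles)
    batch = max(1, int(batch_frac * m))
    for t in range(n_iters):
        idx = rng.choice(m, size=batch, replace=False)   # step (1)
        stats = local_step([triangles[s] for s in idx])  # steps (2)-(3)
        rho_t = rho0 * (rho1 + t) ** kappa               # step (5)
        global_step(stats, rho_t, scale=m / batch)       # step (4)
```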
In our setting, the most natural way to obtain an unbiased gradient of L(γ, λ) is to sample a "mini-batch" of triangular motifs at each iteration, and then average the gradient of local terms in (4) only for these sampled triangles. Formally, let m be the total number of triangles and define

$$ \mathcal{L}_S(\gamma, \lambda) = g(\gamma, \lambda) + \frac{m}{|S|} \sum_{(i,j,k)\in S} \max_{\phi_{ijk}} \ell(\phi_{ijk}, \gamma, \lambda), \quad (5) $$

where S is a mini-batch of triangles sampled uniformly at random. It is easy to verify that E_S[L_S(γ, λ)] = L(γ, λ), hence ∇L_S(γ, λ) is unbiased: E_S[∇L_S(γ, λ)] = ∇L(γ, λ).
Exact Local Update. To obtain the gradient ∇L_S(γ, λ), one needs to compute the optimal local variational parameters φ_ijk (keeping γ and λ fixed) for each sampled triangle (i, j, k) in the minibatch S; these optimal φ_ijk's are then used in equation (5) to compute ∇L_S(γ, λ). Taking partial derivatives of (3) with respect to each local term φ^{xyz}_{ijk} and setting them to zero, we get for distinct x, y, z ∈ {1, . . . , K},

$$ \phi^{xyz}_{ijk} \propto \exp\big\{ \mathbb{E}_q[\log B_{0,2}]\, \mathbb{I}[E_{ijk}=4] + \mathbb{E}_q[\log(B_{0,1}/3)]\, \mathbb{I}[E_{ijk}\neq 4] + \mathbb{E}_q[\log \theta_{i,x} + \log \theta_{j,y} + \log \theta_{k,z}] \big\}. \quad (6) $$

See Appendix for the update equations of φ^{xxy}_{ijk} and φ^{xxx}_{ijk} (x ≠ y).
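For the all-distinct configurations, update (6) only needs expected log-probabilities under the Dirichlet posteriors, i.e. digamma differences. A sketch (ours; the diagonal and two-distinct cases, which live in the Appendix, are omitted, so the returned scores are unnormalised):

```python
import numpy as np
from scipy.special import digamma

def elog_dir(lam):
    """E_q[log p], elementwise, for p ~ Dirichlet(lam)."""
    return digamma(lam) - digamma(lam.sum())

def local_scores_distinct(E_ijk, gamma_i, gamma_j, gamma_k, lam0, support):
    """log phi_{ijk}^{xyz} from (6), up to normalisation, for the
    all-distinct triples (x, y, z) listed in `support`."""
    elog_b0 = elog_dir(lam0)  # lam0 = (lambda_{0,1}, lambda_{0,2})
    base = elog_b0[1] if E_ijk == 4 else elog_b0[0] - np.log(3.0)
    ei, ej, ek = elog_dir(gamma_i), elog_dir(gamma_j), elog_dir(gamma_k)
    return np.array([base + ei[x] + ej[y] + ek[z] for x, y, z in support])
```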
O(K) Approximation to Local Update. For each sampled triangle (i, j, k), the exact local update requires O(K³) work to solve for all φ^{xyz}_{ijk}, making it unscalable. To enable a faster local update, we replace q_ijk(x, y, z | φ_ijk) in (1) with a simpler "mixture-of-deltas" variational distribution,

$$ q_{ijk}(x, y, z \mid \delta_{ijk}) = \sum_{a} \delta^{aaa}_{ijk}\, \mathbb{I}[x=y=z=a] + \sum_{(a,b,c)\in A} \delta^{abc}_{ijk}\, \mathbb{I}[x=a,\, y=b,\, z=c], $$

where A is a randomly chosen set of triples (a, b, c) with size O(K), and Σ_a δ^{aaa}_{ijk} + Σ_{(a,b,c)∈A} δ^{abc}_{ijk} = 1. In other words, we assume the probability mass of the variational posterior q(s_i,jk, s_j,ik, s_k,ij) falls entirely on the K "diagonal" role combinations (a, a, a) as well as O(K) randomly chosen "off-diagonals" (a, b, c). Conveniently, the δ update equations are identical to their φ counterparts as in (6), except that we normalize over the δ's instead.

In our implementation, we generate A by picking 3K combinations of the form (a, a, b), (a, b, a) or (b, a, a), and another 3K combinations of the form (a, b, c), thus mirroring the parameter structure of B. Furthermore, we re-pick A every time we perform the local update on some triangle (i, j, k), thus avoiding any bias due to a single choice of A. We find that this approximation works as well as the full parameterization in (1), yet requires only O(K) work per sampled triangle. Note that any choice of A yields a valid lower bound to the true log-likelihood; this follows from standard variational inference theory.
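Re-drawing the support set A is a few lines. The sketch below is our reading of the construction (assuming K ≥ 3): 3K one-repeat triples mirroring B_xx plus 3K all-distinct triples mirroring B_0.

```python
def draw_support(K, seed=0):
    """Off-diagonal support A: 3K triples of the forms (a,a,b),
    (a,b,a), (b,a,a), plus 3K all-distinct triples (a,b,c)."""
    rng = np.random.default_rng(seed)
    A = []
    for _ in range(3 * K):
        a, b = rng.choice(K, size=2, replace=False)
        A.append([(a, a, b), (a, b, a), (b, a, a)][rng.integers(3)])
    for _ in range(3 * K):
        A.append(tuple(rng.choice(K, size=3, replace=False)))
    return A
```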
Global Update. We appeal to stochastic natural gradient ascent [2, 20, 10] to optimize the global parameters λ and γ, as it greatly simplifies the update rules while maintaining the same asymptotic convergence properties as classical stochastic gradient. The natural gradient ∇̃L_S(γ, λ) is obtained by a premultiplication of the ordinary gradient ∇L_S(γ, λ) with the inverse of the Fisher information of the variational posterior q. See Appendix for the exact forms of the natural gradients with respect to λ and γ. To update the parameters λ and γ, we apply the stochastic natural gradient ascent rule

$$ \lambda_{t+1} = \lambda_t + \rho_t \tilde{\nabla}_\lambda \mathcal{L}_S(\lambda_t, \gamma_t), \qquad \gamma_{t+1} = \gamma_t + \rho_t \tilde{\nabla}_\gamma \mathcal{L}_S(\lambda_t, \gamma_t), \quad (7) $$

where the step size is given by ρ_t = ρ_0(ρ_1 + t)^κ. To ensure convergence, ρ_0, ρ_1 and κ are set such that Σ_t ρ_t² < ∞ and Σ_t ρ_t = ∞ (Section 5 has our experimental values). The global update only costs O(NK) time per iteration due to the parsimonious O(K) parameterization of our PTM.
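For conditionally conjugate pairs such as the Dirichlet posteriors here, the natural-gradient step in (7) reduces to a convex combination of the current parameter and a noisy coordinate-ascent target [10]. A sketch, assuming the exact expressions in the paper's appendix follow this standard form (which we have not verified):

```python
def dirichlet_natgrad_step(lam, prior, suff_stats, rho, scale):
    """lam <- lam + rho * (prior + scale * suff_stats - lam);
    `scale` is m/|S|, reweighting mini-batch statistics to full data."""
    return (1.0 - rho) * lam + rho * (prior + scale * suff_stats)
```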
Our full inferential procedure is summarized in Algorithm 1. Within a mini-batch S, steps (2)-(3) can be trivially parallelized across triangles. Furthermore, the local parameters q_ijk(x, y, z) can be discarded between iterations, since all natural gradient sufficient statistics can be accumulated during the local update. This saves up to tens of gigabytes of memory on million-node networks.
5 Experiments
We demonstrate that our stochastic variational algorithm achieves latent space recovery accuracy
comparable to or better than prior work, but in only a fraction of the time. In addition, we perform
heldout link prediction and likelihood lower bound (i.e. perplexity) experiments on several large
real networks, showing that our approach is orders of magnitude more scalable than previous work.
5.1 Generating Synthetic Data
We use two latent space models as the simulator for our experiments: the MMSB model [1] (which the MMSB batch variational algorithm solves for), and a model that produces power-law networks from a latent space (see Appendix for details). Briefly, the MMSB model produces networks with "blocks" of nodes characterized by high edge probabilities, whereas the Power-Law model produces "communities" centered around a high-degree hub node. We show that our algorithm rapidly and accurately recovers latent space roles based on these two notions of node-relatedness.

For both models, we synthesized ground truth role vectors θ_i's to generate networks of varying difficulty. We generated networks with N ∈ {500, 1000, 2000, 5000, 10000} nodes, with the number of roles growing as K = N/100, to simulate the fact that large networks can have more roles [24]. We generated "easy" networks where each θ_i contains 1 to 2 nonzero roles, and "hard" networks with 1 to 4 roles per θ_i. A full technical description of our networks can be found in the Appendix.
5.2 Latent Space Recovery on Synthetic Data
Task and Evaluation. Given one of the synthetic networks, the task is to recover estimates θ̂_i's of the original latent space vectors θ_i's used to generate the network. Because we are comparing different algorithms (with varying model assumptions) on different networks (generated under their own assumptions), we standardize our evaluation by thresholding all outputs θ̂_i's at 1/8 = 0.125 (because there are no more than 4 roles per θ_i), and use Normalized Mutual Information (NMI) [12, 23], a commonly-used measure of overlapping cluster accuracy, to compare the θ̂_i's with the true θ_i's (thresholded similarly). In other words, we want to recover the set of non-zero roles.
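The thresholding step is trivial but worth pinning down (a sketch; the overlapping-NMI computation itself follows [12, 23] and is not reproduced here):

```python
import numpy as np

def role_sets(theta_hat, thresh=0.125):
    """Per-node sets of predicted roles after thresholding at 1/8;
    these sets are the inputs to the overlapping NMI of [12, 23]."""
    return [set(np.flatnonzero(row > thresh)) for row in theta_hat]
```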
Competing Algorithms and Initialization. We tested the following algorithms:

- Our PTM stochastic variational algorithm. We used τ = 50 subsampling² (i.e. C(50, 2) = 1225 triangles per node), hyperparameters α = η = 0.1, and a 10% minibatch size with step-size ρ_0(ρ_1 + t)^κ, where ρ_0 = 100, ρ_1 = 10000, κ = −0.5, and t is the iteration number. Our algorithm has a runtime complexity of O(N τ² K). Since our algorithm can be run in parallel, we conduct all experiments using 4 threads; compared to single-threaded execution, we observe this reduces runtime to about 40%.
- MMTM collapsed blocked Gibbs sampler, according to [8]. We also used τ = 50 subsampling. The algorithm has O(N τ² K³) time complexity, and is single-threaded.
- PTM collapsed blocked Gibbs sampler. Like the above MMTM Gibbs, but using our PTM model. Because of block sampling, complexity is still O(N τ² K³). Single-threaded.
- MMSB batch variational [1]. This algorithm has O(N² K²) time complexity, and is single-threaded.
All these algorithms are locally-optimal search procedures, and thus sensitive to initial values. In particular, if nodes from two different roles are initialized to have the same role, the output is likely to merge all nodes in both roles into a single role. To ensure a meaningful comparison, we therefore provide the same fixed initialization to all algorithms: for every role x, we provide 2 example nodes i, and initialize the remaining nodes to have random roles. In other words, we seed 2% of the nodes with one of their true roles, and let the algorithms proceed from there³.
Recovery Accuracy. Results of our method, MMSB Variational, MMTM Gibbs and PTM
Gibbs are in Figure 1. Our method exhibits high accuracy (i.e. NMI close to 1) across almost all
networks, validating its ability to recover latent roles under a range of network sizes N and roles
K. In contrast, as N (and thus K) increases, MMSB Variational exhibits degraded performance
despite having converged, while MMTM/PTM Gibbs converge to and become stuck in local minima
²We chose τ = 50 because almost all our synthetic networks have median degree ≈ 50. Choosing τ above the median degree ensures that more than 50% of the nodes will receive all their assigned triangles.

³In general, one might not have any ground truth roles or labels to seed the algorithm with. For such cases, our algorithm can be initialized as follows: rank all nodes according to the number of 3-triangles they touch, and then seed the top K nodes with different roles x. The intuition is that "good" roles may be defined as having a high ratio of 3-triangles to 2-triangles among participating nodes.
[Figure 1 here: "Latent space recovery on Synthetic Power-Law and MMSB Networks". Panels: "NMI: MMSB easy", "NMI: MMSB hard", "NMI: Power-Law easy", "NMI: Power-Law hard" (NMI vs. 1,000s of nodes N, for our method, MMTM Gibbs, PTM Gibbs and MMSB Variational); "Runtime: MMSB hard", "Runtime: Power-Law hard" (seconds per data pass vs. N); "Convergence: MMSB Hard, N=1000" and "Convergence: Power-Law Hard, N=1000" (NMI vs. data passes, 10% mini-batches vs. full batch variational).]

Figure 1: Synthetic Experiments. Left/Center: Latent space recovery accuracy (measured using Normalized Mutual Information) and runtime per data pass for our method and baselines. With the MMTM/PTM Gibbs and MMSB Variational algorithms, the larger networks did not complete within 12 hours. The runtime plots for MMSB easy and Power-Law easy experiments are very similar to the hard experiments, so we omit them. Right: Convergence of our stochastic variational algorithm (with 10% minibatches) versus a batch variational version of our algorithm. On N = 1,000 networks, our minibatch algorithm converges within 1-2 data passes.
| Network Type | Name | Nodes N | Edges | Our Method AUC | MMSB Variational AUC |
|---|---|---|---|---|---|
| Synthetic | MMSB | 2.0K | 40K | 0.93 | 0.88 |
| Synthetic | Power-law | 2.0K | 40K | 0.91 | 0.75 |
| Dictionary | Roget | 1.0K | 3.6K | 0.97 | 0.81 |
| Dictionary | Odlis | 2.9K | 16K | 0.94 | 0.82 |
| Biological | Yeast | 2.4K | 6.6K | 0.65 | 0.77 |
| arXiv Collaboration | GrQc | 5.2K | 14K | 0.72 | 0.86 |
| arXiv Collaboration | AstroPh | 18.7K | 200K | 0.81 | -- |
| Internet | Stanford | 282K | 2.0M | 0.94 | -- |
| Social | Youtube | 1.1M | 3.0M | 0.71 | -- |

Table 2: Link Prediction Experiments, measured using AUC. Our method performs similarly to MMSB Variational on synthetic data. MMSB performs better on smaller, non-social networks, while we perform better on larger, social networks (or MMSB fails to complete due to lack of scalability). Roget, Odlis and Yeast networks are from Pajek datasets (http://vlado.fmf.uni-lj.si/pub/networks/data/); the rest are from Stanford Large Network Dataset Collection (http://snap.stanford.edu/data/).
(even after many iterations and trials), without reaching a good solution⁴. We believe our method maintains high accuracy due to its parsimonious O(K) parameter structure, compared to MMSB Variational's O(K²) block matrix and MMTM Gibbs's O(K³) tensor of triangle parameters. Having fewer parameters may lead to better parameter estimates, and better task performance.

Runtime. On the larger networks, MMSB Variational and MMTM/PTM Gibbs did not even finish execution due to their high runtime complexity. This can be seen in the runtime graphs, which plot the time taken per data pass⁵: at N = 5,000, all 3 baselines require orders of magnitude more time than our method does at N = 10,000. Recall that K = O(N), and that our method has time complexity O(N τ² K), while MMSB Variational has O(N² K²), and MMTM/PTM Gibbs has O(N τ² K³); hence, our method runs in O(N²) on these synthetic networks, while the others run in O(N⁴). This highlights the need for network methods that are linear in N and K.

Convergence of stochastic vs. batch algorithms. We also demonstrate that our stochastic variational algorithm with 10% mini-batches converges much faster to the correct solution than a non-stochastic, full-batch implementation. The convergence graphs in Figure 1 plot NMI as a function of data passes, and show that our method converges to the (almost) correct solution in 1-2 data passes. In contrast, the batch algorithm takes 10 or more data passes to converge.
5.3 Heldout Link Prediction on Real and Synthetic Networks
We compare MMSB Variational and our method on a link prediction task, in which 10% of
the edges are randomly removed (set to zero) from the network, and, given this modified network,
the task is to rank these heldout edges against an equal number of randomly chosen non-edges.
For MMSB, we simply ranked according to the link probability under the MMSB model. For our
⁴With more generous initializations (20 out of 100 ground truth nodes per role), MMTM/PTM Gibbs converge correctly. In practice however, this is an unrealistic amount of prior knowledge to expect. We believe that more sophisticated MCMC schemes may fix this convergence issue with MMTM/PTM models.

⁵One data pass is defined as performing variational inference on m triangles, where m is equal to the total number of triangles. This takes the same amount of time for both the stochastic and batch algorithms.
| Name | Nodes | Edges | τ | 2,3-Tris (for τ) | Frac. 3-Tris | Roles K | Threads | Runtime (10 data passes) |
|---|---|---|---|---|---|---|---|---|
| Brightkite | 58K | 214K | 50 | 3.5M | 0.11 | 64 | 4 | 34 min |
| Brightkite | 58K | 214K | 50 | 3.5M | 0.11 | 300 | 4 | 2.6 h |
| Slashdot Feb 2009 | 82K | 504K | 50 | 9.0M | 0.030 | 100 | 4 | 2.4 h |
| Slashdot Feb 2009 | 82K | 504K | 50 | 9.0M | 0.030 | 300 | 4 | 6.7 h |
| Stanford Web | 282K | 2.0M | 20 | 11.4M | 0.57 | 5 | 4 | 10 min |
| Stanford Web | 282K | 2.0M | 50 | 25.0M | 0.42 | 100 | 4 | 6.3 h |
| Berkeley-Stanford Web | 685K | 6.6M | 30 | 57.6M | 0.55 | 100 | 8 | 15.2 h |
| Youtube | 1.1M | 3.0M | 50 | 36.0M | 0.053 | 100 | 8 | 9.1 h |

Table 3: Real Network Experiments. All networks were taken from the Stanford Large Network Dataset Collection; directed networks were converted to undirected networks via symmetrization. Some networks were run with more than one choice of settings. Runtime is the time taken for 10 data passes (which was more than sufficient for convergence on all networks, see Figure 2).
[Figure 2 here: "Real Networks: Heldout lower bound of our method". Panels (training and heldout lower bound vs. hours): Brightkite K=64 (4 threads), Brightkite K=300 (4 threads), Slashdot K=100 (4 threads), Slashdot K=300 (4 threads), Stanford K=5 (4 threads), Stanford K=100 (4 threads), Berk-Stan K=100 (8 threads), Youtube K=100 (8 threads).]

Figure 2: Real Network Experiments. Training and heldout variational lower bound (equivalent to perplexity) convergence plots for all experiments in Table 3. Each plot shows both lower bounds over 10 data passes (i.e. 100 iterations with 10% minibatches). In all cases, we observe convergence between 2-5 data passes, and the shape of the heldout curve closely mirrors the training curve (i.e. no overfitting).
method, we ranked possible links i-j by the probability that the triangle (i, j, k) will include edge i-j, marginalizing over all choices of the third node k and over all possible role choices for nodes i, j, k. Table 2 displays results for a variety of networks, and our triangle-based method does better on larger social networks than the edge-based MMSB. This matches what has been observed in the network literature [24], and further validates our triangle modeling assumptions.
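Whatever scoring rule is used (the triangle marginalization above for our model, edge probabilities for MMSB), the AUC in Table 2 is just the probability that a heldout edge outranks a random non-edge. A sketch:

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """AUC over heldout edges vs. sampled non-edges (ties count 1/2)."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```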
5.4 Real World Networks: Convergence on Heldout Data
Finally, we demonstrate that our approach is capable of scaling to large real-world networks, achieving convergence in a fraction of the time reported by recent work on scalable network modeling. Table 3 lists the networks that we tested on, ranging in size from N = 58K to N = 1.1M. With a few exceptions, the experiments were conducted with τ = 50 and 4 computational threads. In particular, for every network, we picked τ to be larger than the average degree, thus minimizing the amount of triangle data lost to subsampling. Figure 2 plots the training and heldout variational lower bound for several experiments, and shows that our algorithm always converges in 2-5 data passes.
We wish to highlight two experiments, namely the Brightkite network for K = 64, and the Stanford network for K = 5 (the first and fifth rows respectively in Table 3). Gopalan et al. ([5]) reported convergence on Brightkite in 8 days using their scalable a-MMSB algorithm with 4 threads, while Ho et al. ([8]) converged on Stanford in 18.5 hours using the MMTM Gibbs algorithm on 1 thread. In both settings, our algorithm is orders of magnitude faster: using 4 threads, it converged on Brightkite and Stanford in just 12 and 4 minutes respectively, as seen in Figure 2.
In summary, we have constructed a latent space network model with O(N K) parameters and devised a stochastic variational algorithm for O(N K) inference. Our implementation allows network
analysis with millions of nodes N and hundreds of roles K in hours on a single multi-core machine,
with competitive or improved accuracy for latent space recovery and link prediction. These results
are orders of magnitude faster than recent work on scalable latent space network modeling [5, 8].
Acknowledgments
This work was supported by AFOSR FA9550010247, NIH 1R01GM093156 and DARPA
FA87501220324 to Eric P. Xing. Qirong Ho is supported by an A-STAR, Singapore fellowship.
Junming Yin is supported by a Ray and Stephanie Lane Research Fellowship.
References
[1] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981-2014, 2008.
[2] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[3] L. Bottou. Stochastic learning. Advanced Lectures on Machine Learning, pages 146-168, 2004.
[4] M. Carman, F. Crestani, M. Harvey, and M. Baillie. Towards query log based personalization using topic models. In Proceedings of the 19th ACM international conference on Information and knowledge management (CIKM '10), pages 1849-1852, 2010.
[5] P. Gopalan, D. Mimno, S. Gerrish, M. Freedman, and D. Blei. Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems 25, pages 2258-2266. 2012.
[6] M. Granovetter. The strength of weak ties. American Journal of Sociology, 78(6):1360-1380, 1973.
[7] Q. Ho, A. Parikh, and E. Xing. A multiscale community blockmodel for network exploration. Journal of the American Statistical Association, 107(499), 2012.
[8] Q. Ho, J. Yin, and E. Xing. On triangular versus edge representations: towards scalable modeling of networks. In Advances in Neural Information Processing Systems 25, pages 2141-2149. 2012.
[9] P. Hoff, A. Raftery, and M. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090-1098, 2002.
[10] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347, 2013.
[11] D. Hunter, S. Goodreau, and M. Handcock. Goodness of fit of social network models. Journal of the American Statistical Association, 103(481):248-258, 2008.
[12] A. Lancichinetti, S. Fortunato, and J. Kertész. Detecting the overlapping and hierarchical community structure in complex networks. New Journal of Physics, 11(3):033015+, 2009.
[13] Y. Low, D. Agarwal, and A. Smola. Multiple domain user personalization. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '11), pages 123-131, 2011.
[14] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems 22, pages 1276-1284. 2009.
[15] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon. Network motifs: Simple building blocks of complex networks. Science, 298(5594):824-827, 2002.
[16] M. Morris, M. Handcock, and D. Hunter. Specification of exponential-family random graph models: Terms and computational aspects. Journal of Statistical Software, 24(4), 2008.
[17] M. Newman, S. Strogatz, and D. Watts. Random graphs with arbitrary degree distributions and their applications. Physical Review E, 64(2), 2001.
[18] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[19] P. Sarkar and A. Moore. Dynamic social network analysis using latent space models. ACM SIGKDD Explorations Newsletter, 7(2):31-40, 2005.
[20] M. Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649-1681, 2001.
[21] G. Simmel and K. Wolff. The Sociology of Georg Simmel. Free Press, 1950.
[22] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[23] J. Xie, S. Kelley, and B. Szymanski. Overlapping community detection in networks: the state of the art and comparative study. ACM Computing Surveys, 45(4), 2013.
[24] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics. ACM, 2012.
4,395 | 4,979 | Relevance Topic Model for Unstructured Social Group Activity Recognition
Fang Zhao
Yongzhen Huang
Liang Wang
Tieniu Tan
Center for Research on Intelligent Perception and Computing
Institute of Automation, Chinese Academy of Sciences
{fang.zhao,yzhuang,wangliang,tnt}@nlpr.ia.ac.cn
Abstract
Unstructured social group activity recognition in web videos is a challenging task
due to 1) the semantic gap between class labels and low-level visual features and 2)
the lack of labeled training data. To tackle this problem, we propose a ?relevance
topic model? for jointly learning meaningful mid-level representations upon bagof-words (BoW) video representations and a classifier with sparse weights. In
our approach, sparse Bayesian learning is incorporated into an undirected topic
model (i.e., Replicated Softmax) to discover topics which are relevant to video
classes and suitable for prediction. Rectified linear units are utilized to increase the
expressive power of topics so as to explain better video data containing complex
contents and make variational inference tractable for the proposed model. An
efficient variational EM algorithm is presented for model parameter estimation
and inference. Experimental results on the Unstructured Social Activity Attribute
dataset show that our model achieves state of the art performance and outperforms
other supervised topic model in terms of classification accuracy, particularly in the
case of a very small number of labeled training videos.
1 Introduction
The explosive growth of web videos makes automatic video classification important for online video
search and indexing. Classifying short video clips which contain simple motions and actions has
been solved well in standard datasets (such as KTH [1], UCF-Sports [2] and UCF50 [3]). However,
detecting complex activities, specially social group activities [4], in web videos is a more difficult
task because of unstructured activity context and complex multi-object interaction.
In this paper, we focus on the task of automatic classification of unstructured social group activities
(e.g., wedding dance, birthday party and graduation ceremony in Figure 1), where the low-level
features have innate limitations in semantic description of the underlying video data and only a
few labeled training videos are available. Thus, a common method is to learn human-defined (or
semi-human-defined) semantic concepts as mid-level representations to help video classification
[4]. However, those human defined concepts are hardly generalized to a larger or new dataset. To
discover more powerful representations for classification, we propose a novel supervised topic model
called "relevance topic model" to automatically extract latent "relevance" topics from bag-of-words
(BoW) video representations and simultaneously learn a classifier with sparse weights.
Our model is built on Replicated Softmax [5], an undirected topic model which can be viewed as
a family of different-sized Restricted Boltzmann Machines that share parameters. Sparse Bayesian
learning [6] is incorporated to guide the topic model towards learning more predictive topics which
are associated with sparse classifier weights. We refer to those topics corresponding to non-zero
weights as "relevance" topics. Meanwhile, binary stochastic units in Replicated Softmax are replaced by rectified linear units [7], which allows each unit to express more information for better
(a) Wedding Dance
(b) Birthday Party
(c) Graduation Ceremony
Figure 1: Example videos of the "Wedding Dance", "Birthday Party" and "Graduation Ceremony" classes taken from the USAA dataset [4].
explaining video data containing complex content and also makes variational inference tractable for
the proposed model. Furthermore, by using a simple quadratic bound on the log-sum-exp function
[8], an efficient variational EM algorithm is developed for parameter estimation and inference. Our
model is able to be naturally extended to deal with multi-modal data without changing the learning
and inference procedures, which is beneficial for video classification tasks.
2 Related work
The problems of activity analysis and recognition have been widely studied. However, most of the
existing works [9, 10] were done on constrained videos with limited contents (e.g., clean background
and little camera motions). Complex activity recognition in web videos, such as social group activity,
is not much explored. Most relevant to our work is a recent work that learns video attributes to
analyze social group activity [4]. In [4], a semi-latent attribute space is introduced, which consists
of user-defined attributes, class-conditional and background latent attributes, and an extended Latent
Dirichlet Allocation (LDA) [11] is used to model those attributes as topics. Different from that, our
work discovers a set of discriminative latent topics without extra human annotations on videos.
From the view of graphical models, most similar to our model are the maximum entropy discrimination LDA (MedLDA) [12] and the generative Classification Restricted Boltzmann Machines (gClassRBM) [13], both of which have been successfully applied to document semantic analysis. MedLDA
integrates the max-margin learning and hierarchical directed topic models by optimizing a single
objective function with a set of expected margin constraints. MedLDA tries to estimate parameters and find latent topics in a max-margin sense, which is different from our model relying on the
principle of automatic relevance determination [14]. The gClassRBM used to model word count
data is actually a supervised Replicated Softmax. Different from the gClassRBM, instead of point
estimation of classifier parameters, our proposed model learns a sparse posterior distribution over
parameters within a Bayesian paradigm.
3 Models and algorithms
We start with the description of Replicated Softmax, and then by integrating it with sparse Bayesian
learning, propose the relevance topic model for videos. Finally, we develop an efficient variational
algorithm for inference and parameter estimation.
3.1 Replicated Softmax
The Replicated Softmax model is a two-layer undirected graphical model, which can be used to
model sparse word count data and extract latent semantic topics from document collections. Replicated Softmax allows for very efficient inference and learning, and outperforms LDA in terms of
both the generalization performance and the retrieval accuracy on text datasets.
As shown in Figure 2 (left), this model is a generalization of the restricted Boltzmann machine
(RBM). The bottom layer represents a multinomial visible unit sampled K times (K is the total
number of words in a document) and the top layer represents binary stochastic hidden units.
[Figure 2: Left: Replicated Softmax model: an undirected graphical model. Right: Relevance topic model: a mixed graphical model. The undirected part models the marginal distribution of video BoW vectors v and the directed part models the conditional distribution of video classes y given latent topics t^r by using a hierarchical prior on weights η.]
Let a word count vector v ∈ ℕ^N be the visible unit (N is the size of the vocabulary), and a binary topic vector h ∈ {0, 1}^F be the hidden units. Then the energy function of the state {v, h} is defined as follows:

E(v, h; \theta) = -\sum_{i=1}^{N}\sum_{j=1}^{F} W_{ij} v_i h_j - \sum_{i=1}^{N} a_i v_i - K \sum_{j=1}^{F} b_j h_j,   (1)

where θ = {W, a, b}, W_ij is the weight connecting v_i and h_j, and a_i and b_j are the bias terms of the visible and hidden units respectively. The joint distribution over the visible and hidden units is defined by:

P(v, h; \theta) = \frac{1}{Z(\theta)} \exp(-E(v, h; \theta)), \qquad Z(\theta) = \sum_{v}\sum_{h} \exp(-E(v, h; \theta)),   (2)

where Z(θ) is the partition function. Since exact maximum likelihood learning is intractable, the contrastive divergence [15] approximation is often used to estimate model parameters in practice.
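For concreteness, the following is a minimal NumPy sketch of the energy function in Eq. (1) and one CD-1 parameter update. The array sizes, names, and learning rate are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 5000, 40                          # vocabulary size, number of topics
W = 0.01 * rng.standard_normal((N, F))   # word-topic weights
a, b = np.zeros(N), np.zeros(F)          # visible / hidden biases

def energy(v, h):
    """E(v, h; theta) from Eq. (1); K is the document length sum(v)."""
    K = v.sum()
    return -v @ W @ h - a @ v - K * (b @ h)

def cd1_update(v, lr=1e-3):
    """One contrastive-divergence (CD-1) step on log P(v; theta)."""
    global W, a, b
    K = v.sum()
    ph = 1.0 / (1.0 + np.exp(-(K * b + v @ W)))      # P(h_j = 1 | v)
    h = (rng.random(F) < ph).astype(float)
    logits = W @ h + a                                # reconstruct the document
    p = np.exp(logits - logits.max()); p /= p.sum()
    v_neg = rng.multinomial(int(K), p).astype(float)
    ph_neg = 1.0 / (1.0 + np.exp(-(K * b + v_neg @ W)))
    W += lr * (np.outer(v, ph) - np.outer(v_neg, ph_neg))
    a += lr * (v - v_neg)
    b += lr * K * (ph - ph_neg)

v = rng.multinomial(200, np.ones(N) / N).astype(float)  # toy document, K = 200
cd1_update(v)
```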
3.2 Relevance topic model
The relevance topic model (RTM) is an integration of sparse Bayesian learning and Replicated Softmax; the main idea is to jointly learn discriminative topics as mid-level video representations and a sparse discriminant function as a video classifier.
We represent the video dataset with class labels y ∈ {1, ..., C} as D = {(v_m, y_m)}_{m=1}^{M}, where each video is represented as a BoW vector v ∈ ℕ^N. Consider modeling video BoW vectors using the Replicated Softmax. Let t^r = [t^r_1, ..., t^r_F] denote an F-dimensional topic vector of one video. According to Equation 2, the marginal distribution over the BoW vector v is given by:

P(v; \theta) = \frac{1}{Z(\theta)} \sum_{t^r} \exp(-E(v, t^r; \theta)).   (3)
Since videos contain more complex and diverse contents than documents, binary topics are far from ideal for explaining video data. We replace the binary hidden units in the original Replicated Softmax with rectified linear units, which are given by:

t^r_j = \max(0, t_j), \qquad P(t_j \mid v; \theta) = \mathcal{N}\Big(t_j \,\Big|\, K b_j + \sum_{i=1}^{N} W_{ij} v_i, \; 1\Big),   (4)

where N(·|μ, τ) denotes a Gaussian distribution with mean μ and variance τ. The rectified linear units, taking nonnegative real values, can preserve information about the relative importance of topics. Meanwhile, the rectified Gaussian distribution is semi-conjugate to the Gaussian likelihood. This facilitates the development of variational algorithms for posterior inference and parameter estimation, which we describe in Section 3.3.
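As a small illustrative sketch (variable names assumed, not from the paper), drawing the rectified linear topic activations of Eq. (4) is a unit-variance Gaussian draw around the RBM pre-activation, clipped at zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rectified_topics(v, W, b, rng):
    """t_j ~ N(K*b_j + sum_i W_ij v_i, 1) and t^r_j = max(0, t_j), as in Eq. (4)."""
    K = v.sum()
    mean = K * b + v @ W                 # (F,) pre-activations
    t = mean + rng.standard_normal(mean.shape)
    return np.maximum(0.0, t)            # rectification keeps relative magnitudes

# toy usage
N, F = 5000, 40
W = 0.01 * rng.standard_normal((N, F)); b = np.zeros(F)
v = rng.multinomial(200, np.ones(N) / N).astype(float)
t_r = sample_rectified_topics(v, W, b, rng)
```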
Let η = {η_y}_{y=1}^{C} denote a set of class-specific weight vectors. We define the discriminant function as a linear combination of topics: F(y, t^r, η) = η_y^T t^r. The conditional distribution of classes is defined as follows:

P(y \mid t^r, \eta) = \frac{\exp(F(y, t^r, \eta))}{\sum_{y'=1}^{C} \exp(F(y', t^r, \eta))},   (5)

and the classifier is given by:

\hat{y} = \arg\max_{y \in C} \; \mathbb{E}[F(y, t^r, \eta) \mid v].   (6)
The weights η are given a zero-mean Gaussian prior:

P(\eta \mid \alpha) = \prod_{y=1}^{C}\prod_{j=1}^{F} P(\eta_{yj} \mid \alpha_{yj}) = \prod_{y=1}^{C}\prod_{j=1}^{F} \mathcal{N}(\eta_{yj} \mid 0, \alpha_{yj}^{-1}),   (7)

where α = {α_y}_{y=1}^{C} is a set of hyperparameter vectors, and each hyperparameter α_yj is assigned independently to each weight η_yj. The hyperpriors over α are given by Gamma distributions:

P(\alpha) = \prod_{y=1}^{C}\prod_{j=1}^{F} P(\alpha_{yj}) = \prod_{y=1}^{C}\prod_{j=1}^{F} \Gamma(c)^{-1} d^{c} \alpha_{yj}^{c-1} e^{-d\alpha_{yj}},   (8)

where Γ(c) is the Gamma function. To obtain broad hyperpriors, we set c and d to small values, e.g., c = d = 10^{-4}. This hierarchical prior, which is a type of automatic relevance determination prior [14], enables the posterior probability of the weights η to be concentrated at zero and thus effectively switches off the corresponding topics that are considered to be irrelevant to classification.
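For illustration only, the log of the hierarchical prior in Eqs. (7)-(8) can be evaluated as below (SciPy's Gamma with shape c and rate d is `gamma(a=c, scale=1/d)`); integrating alpha out would give a heavy-tailed marginal on each weight, which is what drives irrelevant topic weights toward zero.

```python
import numpy as np
from scipy.stats import norm, gamma

c = d = 1e-4   # broad hyperprior, as in the text

def log_hierarchical_prior(eta, alpha):
    """log P(eta | alpha) + log P(alpha) for C x F arrays (Eqs. 7-8)."""
    lp_eta = norm.logpdf(eta, loc=0.0, scale=1.0 / np.sqrt(alpha)).sum()
    lp_alpha = gamma.logpdf(alpha, a=c, scale=1.0 / d).sum()
    return lp_eta + lp_alpha
```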
Finally, given the parameters θ, RTM defines the joint distribution:

P(v, y, t^r, \eta, \alpha; \theta) = P(v; \theta)\, P(y \mid t^r, \eta) \Big[\prod_{j=1}^{F} P(t_j \mid v; \theta)\Big] \Big[\prod_{y=1}^{C}\prod_{j=1}^{F} P(\eta_{yj} \mid \alpha_{yj}) P(\alpha_{yj})\Big].   (9)

Figure 2 (right) illustrates RTM as a mixed graphical model with undirected and directed edges. The undirected part models the marginal distribution of video data and the directed part models the conditional distribution of classes given latent topics. We can naturally extend RTM to Multimodal RTM by using the undirected part to model the multimodal data v = {v^{mod_l}}_{l=1}^{L}. Accordingly, P(v; θ) in Equation 9 is replaced with \prod_{l=1}^{L} P(v^{mod_l}; \theta^{mod_l}). In Section 3.3, we can see that this will not change the learning and inference rules.
3.3 Parameter estimation and inference
For RTM, we wish to find parameters θ = {W, a, b} that maximize the log likelihood on D:

\log P(D; \theta) = \log \int P(\{v_m, y_m, t^r_m\}_{m=1}^{M}, \eta, \alpha; \theta)\, d\{t_m\}_{m=1}^{M}\, d\eta\, d\alpha,   (10)

and learn the posterior distribution P(η, α | D; θ) = P(η, α, D; θ)/P(D; θ). Since exactly computing P(D; θ) is intractable, we employ variational methods to optimize a lower bound L on the log likelihood by introducing a variational distribution to approximate P({t_m}_{m=1}^{M}, η, α | D; θ):

Q(\{t_m\}_{m=1}^{M}, \eta, \alpha) = \Big[\prod_{m=1}^{M}\prod_{j=1}^{F} q(t_{mj})\Big]\, q(\eta)\, q(\alpha).   (11)
Using Jensen's inequality, we have:

\log P(D; \theta) \ge \int Q(\{t_m\}_{m=1}^{M}, \eta, \alpha) \log \frac{\Big[\prod_{m=1}^{M} P(v_m; \theta)\, P(y_m \mid t^r_m, \eta)\, P(t_m \mid v_m; \theta)\Big] P(\eta \mid \alpha)\, P(\alpha)}{Q(\{t_m\}_{m=1}^{M}, \eta, \alpha)}\, d\{t_m\}_{m=1}^{M}\, d\eta\, d\alpha.   (12)
Note that P(y_m | t^r_m, η) is not conjugate to the Gaussian prior, which makes it intractable to compute the variational factors q(η) and q(t_{mj}). Here we use a quadratic bound on the log-sum-exp (LSE) function [8] to derive a further bound. We rewrite P(y_m | t^r_m, η) as follows:

P(y_m \mid t^r_m, \eta) = \exp(y_m^T T^r_m \eta - \mathrm{lse}(T^r_m \eta)),   (13)

where T^r_m \eta = [(t^r_m)^T \eta_1, \ldots, (t^r_m)^T \eta_{C-1}], y_m = \mathbb{I}(y_m = c) is the one-of-C encoding of class label y_m and \mathrm{lse}(x) \triangleq \log(1 + \sum_{y'=1}^{C-1} \exp(x_{y'})) (we set η_C = 0 to ensure identifiability). In [8], the LSE function is expanded as a second order Taylor series around a point ψ, and an upper bound is found by replacing the Hessian matrix H(ψ) with a fixed matrix A = \frac{1}{2}\big[I_{\tilde{C}} - \frac{1}{\tilde{C}+1} \mathbf{1}_{\tilde{C}} \mathbf{1}_{\tilde{C}}^T\big] such that A \succeq H(ψ), where \tilde{C} = C - 1, I_{\tilde{C}} is the identity matrix of size \tilde{C} \times \tilde{C} and \mathbf{1}_{\tilde{C}} is a \tilde{C}-vector of ones. Thus, similar to [16], we have:

\log P(y_m \mid t^r_m, \eta) \ge J(y_m, t^r_m, \eta, \psi_m) = y_m^T T^r_m \eta - \tfrac{1}{2}(T^r_m \eta)^T A\, T^r_m \eta + s_m^T T^r_m \eta - \kappa_m,   (14)

s_m = A\psi_m - \exp(\psi_m - \mathrm{lse}(\psi_m)),   (15)

\kappa_m = \tfrac{1}{2}\psi_m^T A \psi_m - \psi_m^T \exp(\psi_m - \mathrm{lse}(\psi_m)) + \mathrm{lse}(\psi_m),   (16)

where \psi_m \in \mathbb{R}^{\tilde{C}} is a vector of variational parameters.
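The fixed curvature matrix and the per-video quantities of Eqs. (14)-(16) are cheap to compute. A compact sketch (illustrative names; `psi_m` has length C̃ = C − 1):

```python
import numpy as np
from scipy.special import logsumexp

def bohning_bound_terms(psi_m):
    """A, s_m, kappa_m for the quadratic LSE bound of Eqs. (14)-(16)."""
    Ct = psi_m.shape[0]                            # C~ = C - 1
    A = 0.5 * (np.eye(Ct) - np.ones((Ct, Ct)) / (Ct + 1))
    lse = logsumexp(np.append(psi_m, 0.0))         # lse(x) = log(1 + sum_y' e^{x_y'})
    p = np.exp(psi_m - lse)                        # exp(psi_m - lse(psi_m))
    s_m = A @ psi_m - p
    kappa_m = 0.5 * psi_m @ A @ psi_m - psi_m @ p + lse
    return A, s_m, kappa_m

A, s, kappa = bohning_bound_terms(np.zeros(7))     # toy example with C = 8 classes
```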
Substituting J(y_m, t^r_m, η, ψ_m) into Equation 12, we can obtain a further lower bound:

\log P(D; \theta) \ge L(\theta, \psi) = \sum_{m=1}^{M} \log P(v_m; \theta) + \mathbb{E}_Q\Big[\sum_{m=1}^{M} J(y_m, t^r_m, \eta, \psi_m)\Big] + \mathbb{E}_Q\Big[\sum_{m=1}^{M} \log P(t_m \mid v_m; \theta) + \log P(\eta \mid \alpha) + \log P(\alpha) - \log Q(\{t_m\}_{m=1}^{M}, \eta, \alpha)\Big].   (17)
Now we convert the problem of model training into maximizing the lower bound L(θ, ψ) with respect to the variational posteriors q(η), q(α) and q(t) = {q(t_{mj})} as well as the parameters θ and ψ = {ψ_m}. We can give some insight into the objective function L(θ, ψ): the first term is exactly the marginal log likelihood of video data and the second term is a variational bound of the conditional log likelihood of classes; thus maximizing L(θ, ψ) is equivalent to finding a set of model parameters and latent topics which fit the video data well and simultaneously make good predictions for video classes.
Due to the conjugacy properties of the chosen distributions, we can directly calculate the free-form variational posteriors q(η), q(α) and parameters ψ:

q(\eta) = \mathcal{N}(\eta \mid E_\eta, V_\eta),   (18)

q(\alpha) = \prod_{y=1}^{C}\prod_{j=1}^{F} \mathrm{Gamma}(\alpha_{yj} \mid \hat{c}, \hat{d}_{yj}),   (19)

\psi_m = \langle T^r_m \rangle_{q(t)}\, E_\eta,   (20)

where ⟨·⟩_q denotes an expectation with respect to the distribution q and

V_\eta = \Big[\sum_{m=1}^{M} \big\langle (T^r_m)^T A\, T^r_m \big\rangle_{q(t)} + \mathrm{diag}\langle \alpha_{yj} \rangle_{q(\alpha)}\Big]^{-1}, \qquad E_\eta = V_\eta \sum_{m=1}^{M} \big\langle (T^r_m)^T \big\rangle_{q(t)} (y_m + s_m),   (21)

\hat{c} = c + \tfrac{1}{2}, \qquad \hat{d}_{yj} = d + \tfrac{1}{2} \langle \eta_{yj}^2 \rangle_{q(\eta)}.   (22)
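The updates of Eqs. (21)-(22) translate directly into linear algebra. In the sketch below (all names illustrative), `T_list[m]` stands in for the expectation of T^r_m under q(t); the exact V_η update needs second moments of t^r, which this sketch approximates by plugging in the means:

```python
import numpy as np

def update_q_eta_q_alpha(T_list, ys, ss, A, E_alpha, c, d):
    """Eqs. (21)-(22). T_list[m]: (Ct, Ct*F) expected T^r_m; ys[m]: one-of-C label
    (length Ct); ss[m]: s_m from the LSE bound; E_alpha: current <alpha> (flat)."""
    D = T_list[0].shape[1]
    prec = np.diag(E_alpha).astype(float)
    rhs = np.zeros(D)
    for T_m, y_m, s_m in zip(T_list, ys, ss):
        prec += T_m.T @ A @ T_m                    # approx. <(T^r_m)^T A T^r_m>
        rhs += T_m.T @ (y_m + s_m)
    V_eta = np.linalg.inv(prec)
    E_eta = V_eta @ rhs
    c_hat = c + 0.5
    d_hat = d + 0.5 * (E_eta**2 + np.diag(V_eta))  # <eta^2> = E[eta]^2 + Var[eta]
    return E_eta, V_eta, c_hat / d_hat             # Gamma posterior mean c_hat/d_hat
```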
For q(t), the calculation is not immediate because of the rectification. Inspired by [17], we have the following free-form solution:

q(t_{mj}) = \frac{\omega_{pos}}{Z} \mathcal{N}(t_{mj} \mid \mu_{pos}, \sigma^2_{pos})\, u(t_{mj}) + \frac{\omega_{neg}}{Z} \mathcal{N}(t_{mj} \mid \mu_{neg}, \sigma^2_{neg})\, u(-t_{mj}),   (23)

where u(·) is the unit step function. See Appendix A for the parameters of q(t_{mj}).
Given ψ, through repeating the updates of Equations 18-20 and 23 to maximize L(θ, ψ), we can obtain the variational posteriors q(η), q(α) and q(t). Then given q(η), q(α) and q(t), we estimate θ by using stochastic gradient descent to maximize L(θ, ψ), and the derivatives of L(θ, ψ) with respect to θ are given by:

\frac{\partial L(\theta, \psi)}{\partial W_{ij}} = \langle v_i t^r_j \rangle_{data} - \langle v_i t^r_j \rangle_{model} + \frac{1}{M} \sum_{m=1}^{M} v_{mi} \Big( \langle t_{mj} \rangle_{q(t)} - \sum_{i=1}^{N} W_{ij} v_{mi} - K b_j \Big),   (24)

\frac{\partial L(\theta, \psi)}{\partial a_i} = \langle v_i \rangle_{data} - \langle v_i \rangle_{model},   (25)

\frac{\partial L(\theta, \psi)}{\partial b_j} = \langle t^r_j \rangle_{data} - \langle t^r_j \rangle_{model} + \frac{K}{M} \sum_{m=1}^{M} \Big( \langle t_{mj} \rangle_{q(t)} - \sum_{i=1}^{N} W_{ij} v_{mi} - K b_j \Big),   (26)

where the derivatives of \sum_{m=1}^{M} \log P(v_m; \theta) are the same as those in [5].
This leads to the following variational EM algorithm:
E-step: Calculate the variational posteriors q(η), q(α) and q(t).
M-step: Estimate the parameters θ = {W, a, b} through maximizing L(θ, ψ).
These two steps are repeated until L(θ, ψ) converges. For Multimodal RTM learning, we just additionally calculate the gradients of θ^{mod_l} for each modality l in the M-step, while the updating rules are not changed.
After the learning is completed, according to Equation 6 the prediction for new videos can be easily obtained:

\hat{y} = \arg\max_{y \in C} \; \langle \eta_y^T \rangle_{q(\eta)} \langle t^r \rangle_{p(t \mid v; \theta)}.   (27)
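After training, the prediction rule of Eq. (27) needs only the posterior mean of η and the expected rectified topics under p(t|v; θ) from Eq. (4); for t ~ N(μ, 1) one has E[max(0, t)] = μΦ(μ) + φ(μ). A self-contained sketch (sizes and values illustrative; the last row of E_eta would be zero since η_C = 0):

```python
import numpy as np
from scipy.stats import norm

def expected_rectified_topics(v, W, b):
    """E[t^r_j | v] for t_j ~ N(mu_j, 1): E[max(0, t)] = mu*Phi(mu) + phi(mu)."""
    mu = v.sum() * b + v @ W
    return mu * norm.cdf(mu) + norm.pdf(mu)

def predict(v, W, b, E_eta):
    """Eq. (27): y_hat = argmax_y E[eta_y]^T E[t^r | v], with E_eta of shape (C, F)."""
    t_bar = expected_rectified_topics(v, W, b)
    return int(np.argmax(E_eta @ t_bar))

# toy usage
rng = np.random.default_rng(0)
N, F, C = 1000, 20, 8
W = 0.01 * rng.standard_normal((N, F)); b = np.zeros(F)
E_eta = rng.standard_normal((C, F)); E_eta[-1] = 0.0
v = rng.multinomial(100, np.ones(N) / N).astype(float)
print(predict(v, W, b, E_eta))
```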
4 Experiments
We test our models on the Unstructured Social Activity Attribute (USAA) dataset 1 for social group activity recognition. Firstly, we present quantitative evaluations of RTM in the case of different modalities and comparisons with other supervised topic models (namely MedLDA and gClassRBM). Secondly, we compare Multimodal RTM with some baselines in the cases of plentiful and sparse training data respectively. In all experiments, contrastive divergence is used to efficiently approximate the derivatives of the marginal log likelihood, and unsupervised training on Replicated Softmax is used to initialize θ.
4.1 Dataset and video representation
The USAA dataset consists of 8 semantic classes of social activity videos collected from the Internet. The eight classes are: birthday party, graduation party, music performance, non-music performance, parade, wedding ceremony, wedding dance and wedding reception. The dataset contains a total of 1466 videos, with approximately 100 videos per class for training and testing respectively. These videos range from 20 seconds to 8 minutes, averaging 3 minutes, and contain very complex and diverse contents, which brings significant challenges for content analysis.
Each video is represented using three modalities, i.e., static appearance, motion, and auditory. Specifically, three visual and audio local keypoint features are extracted for each video: scale-invariant feature transform (SIFT) [18], spatial-temporal interest points (STIP) [19] and mel-frequency cepstral coefficients (MFCC) [20]. Then the three features are collected into BoW vectors (5000 dimensions for SIFT and STIP, and 4000 dimensions for MFCC) using a soft-weighting clustering algorithm, respectively.
4.2 Model comparisons
To evaluate the discriminative power of video topics learned by RTM, we present quantitative classification results compared with other supervised topic models (MedLDA and gClassRBM) in the
case of different modalities. We have tried our best to tune these compared models and report the
best results.
1 Available at http://www.eecs.qmul.ac.uk/~yf300/USAA/download/.
Table 1: Classification accuracy (%) of different supervised topic models for single-modal features.

            |           SIFT            |           STIP            |           MFCC
            | MedLDA  gClassRBM  RTM    | MedLDA  gClassRBM  RTM    | MedLDA  gClassRBM  RTM
  20 topics | 44.72   45.40      51.99  | 37.28   42.39      48.29  | 34.71   41.70      45.35
  30 topics | 44.17   46.11      53.09  | 38.93   42.25      49.11  | 38.55   43.62      46.67
  40 topics | 43.07   47.08      55.83  | 40.85   42.39      50.62  | 41.15   45.00      48.15
  50 topics | 42.80   46.81      54.17  | 39.75   41.70      51.71  | 41.98   44.31      47.46
  60 topics | 40.74   49.72      54.03  | 41.54   43.35      51.17  | 38.27   43.48      47.33
Table 2: Classification accuracy (%) of different methods for multimodal features.

           |            Multimodal RTM             |                RS+SVM                 | Direct | SVM-UD+LR | SLAS+LR
           | 60     90     120    150    180 topics| 60     90     120    150    180 topics|        |           |
  100 Inst | 60.22  62.69  63.79  64.06  64.72     | 54.60  56.10  57.34  59.26  60.63     | 66.0   | 65.0      | 65.0
  10 Inst  | 38.68  41.29  43.48  43.72  44.99     | 23.73  28.53  30.59  33.47  35.94     | 29.0   | 37.0      | 40.0
Table 1 shows the classification accuracy of different models for three single-modal features: SIFT,
STIP and MFCC. We can see that RTM achieves higher classification accuracy than MedLDA and
gClassRBM in all cases, which demonstrates that through leveraging sparse Bayesian learning to
incorporate class label information into topic modeling, RTM can find more discriminative topical
representations for complex video data.
4.3 Baseline comparisons
We compare Multimodal RTM with the baselines in [4], which are the best results on the USAA dataset:
Direct: Direct SVM or KNN classification on raw video BoW vectors (14000 dimensions), where SVM is used for experiments with more than 10 instances and KNN otherwise.
SVM-UD+LR: SVM attribute classifiers learn 69 user-defined attributes, and then a logistic regression (LR) classifier is performed according to the attribute classifier outputs.
SLAS+LR: A semi-latent attribute space is learned, and then an LR classifier is performed based on the 69 user-defined, 8 class-conditional and 8 latent topics.
Besides, we also perform a comparison with another baseline where different modal topics extracted
by Replicated Softmax are connected together as video representations, and then a multi-class SVM
classifier [21] is learned from the representations. This baseline is denoted by RS+SVM.
The results are illustrated in Table 2. Here the number of topics of each modality is assumed to be
the same. When the labeled training data is plentiful (100 instances per class), the classification performance of Multimodal RTM is similar to the baselines in [4]. However, We argue that our model
learns a lower dimensional latent semantic space which provides efficient video representations and
is able to be better generalized to a larger or new dataset because extra human defined concepts are
not required in our model. When considering the classification scenario where only a very small
number of training data are available (10 instances per class), Multimodal RTM can achieve better
performance with an appropriate number (e.g., > 90) of topics because the sparsity of relevance topics learned by RTM can effectively prevent overfitting to specific training instances. In addition, our
model outperforms RS+SVM in both cases, which demonstrates the advantage of jointly learning
topics and classifier weights through sparse Bayesian learning.
It is also interesting to examine the sparsity of relevance topics. Figure 3 illustrates the degree of
correlation between topics and two different classes. We can see that the learned relevance topics are
very sparse, which leads to good generalisation for new instances and robustness for small datasets.
Figure 3: Relevance topics discovered by RTM for two different classes. Vertical axis indicates the
degree of positive and negative correlation.
5 Conclusion
This paper has proposed a supervised topic model, the relevance topic model (RTM), to jointly learn discriminative latent topical representations and a sparse classifier for recognizing unstructured social group activity. In RTM, sparse Bayesian learning is integrated with an undirected topic model
to discover sparse relevance topics. Rectified linear units are employed to better fit complex video
data and facilitate the learning of the model. Efficient variational methods are developed for parameter estimation and inference. To further improve video classification performance, RTM is also
extended to deal with multimodal data. Experimental results demonstrate that RTM can find more
predictive video topics than other supervised topic models and achieve state of the art classification
performance, particularly in the scenario of lacking labeled training videos.
Appendix A. Parameters of the free-form variational posterior q(t_mj)
The expressions for the parameters of q(t_{mj}) (Equation 23) are listed as follows:

\omega_{pos} = \mathcal{N}(\phi \mid \beta, \nu + 1), \qquad \sigma^2_{pos} = (\nu^{-1} + 1)^{-1}, \qquad \mu_{pos} = \sigma^2_{pos}\Big(\frac{\phi}{\nu} + \beta\Big),   (28)

\omega_{neg} = \mathcal{N}(\phi \mid 0, \nu), \qquad \sigma^2_{neg} = 1, \qquad \mu_{neg} = \beta,   (29)

Z = \frac{1}{2}\,\omega_{pos}\, \mathrm{erfc}\Big(\frac{-\mu_{pos}}{\sqrt{2\sigma^2_{pos}}}\Big) + \frac{1}{2}\,\omega_{neg}\, \mathrm{erfc}\Big(\frac{\mu_{neg}}{\sqrt{2\sigma^2_{neg}}}\Big),   (30)

where erfc(·) is the complementary error function and

\phi = \frac{\big\langle \eta_{\cdot j}^{T} \big( y_m + s_m - \sum_{j' \neq j} A\, \eta_{\cdot j'}\, t^r_{mj'} \big) \big\rangle_{q(\eta)\, q(t)}}{\big\langle \eta_{\cdot j}^{T} A\, \eta_{\cdot j} \big\rangle_{q(\eta)}},   (31)

\nu = \big\langle \eta_{\cdot j}^{T} A\, \eta_{\cdot j} \big\rangle_{q(\eta)}^{-1}, \qquad \beta = \sum_{i=1}^{N} W_{ij} v_{mi} + K b_j.   (32)

We can see that q(t_{mj}) depends on expectations over η and {t_{mj'}}_{j' ≠ j}, which is consistent with the graphical model representation of RTM in Figure 2.
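As a sketch of the bookkeeping, the quantities above compute as follows (names mirror the reconstructed Eqs. (28)-(30), which are themselves a best-effort reading of the original):

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

def q_tmj_params(phi, nu, beta):
    """Mixture weights, means, variances and normalizer of q(t_mj), Eqs. (28)-(30)."""
    w_pos = norm.pdf(phi, loc=beta, scale=np.sqrt(nu + 1.0))
    var_pos = 1.0 / (1.0 / nu + 1.0)
    mu_pos = var_pos * (phi / nu + beta)
    w_neg = norm.pdf(phi, loc=0.0, scale=np.sqrt(nu))
    var_neg, mu_neg = 1.0, beta
    Z = 0.5 * w_pos * erfc(-mu_pos / np.sqrt(2.0 * var_pos)) \
      + 0.5 * w_neg * erfc(mu_neg / np.sqrt(2.0 * var_neg))
    return w_pos, mu_pos, var_pos, w_neg, mu_neg, var_neg, Z
```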
Acknowledgments
This work was supported by the National Basic Research Program of China (2012CB316300), the Hundred Talents Program of CAS, the National Natural Science Foundation of China (61175003, 61135002, 61203252), and the Tsinghua National Laboratory for Information Science and Technology Cross-discipline Foundation.
References
[1] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: a local svm approach. In
ICPR, 2004.
[2] M. Rodriguez, J. Ahmed, and M. Shah. Action mach a spatio-temporal maximum average
correlation height filter for action recognition. In CVPR, 2008.
[3] Ucf50 action dataset. http://vision.eecs.ucf.edu/data/ucf50.rar.
[4] Y.W. Fu, T.M. Hospedales, T. Xiang, and S.G. Gong. Attribute learning for understanding
unstructured social activity. In ECCV, 2012.
[5] R. Salakhutdinov and G.E. Hinton. Replicated softmax: an undirected topic model. In NIPS,
2009.
[6] M.E. Tipping. Sparse bayesian learning and the relevance vector machine. JMLR, 2001.
[7] V. Nair and G.E. Hinton. Rectified linear units improve restricted boltzmann machines. In
ICML, 2010.
[8] D. Bohning. Multinomial logistic regression algorithm. AISM, 1992.
[9] P. Turaga, R. Chellappa, V.S. Subrahmanian, and O. Udrea. Machine recognition of human
activities: a survey. TCSVT, 2008.
[10] J. Varadarajan, R. Emonet, and J.-M. Odobez. A sequential topic model for mining recurrent
activities from long term video logs. IJCV, 2013.
[11] D. Blei, A.Y. Ng, and M.I. Jordan. Latent dirichlet allocation. JMLR, 2003.
[12] J. Zhu, A. Ahmed, and E.P. Xing. Medlda: Maximum margin supervised topic models. JMLR,
2012.
[13] H. Larochelle, M. Mandel, R. Pascanu, and Y. Bengio. Learning algorithms for the classification restricted boltzmann machine. JMLR, 2012.
[14] R.M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[15] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural
Computation, 2002.
[16] K.P. Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.
[17] M. Harva and A. Kaban. Variational learning for rectified factor analysis. Signal Processing,
2007.
[18] D.G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[19] I. Laptev. On space-time interest points. IJCV, 2005.
[20] B. Logan. Mel frequency cepstral coefficients for music modeling. ISMIR, 2000.
[21] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector learning for interdependent and structured output spaces. In ICML, 2004.
| 4979 |@word r:3 tried:1 contrastive:3 tr:13 plentiful:2 wedding:6 series:1 contains:1 document:4 outperforms:3 existing:1 visible:4 partition:1 hofmann:1 enables:1 update:1 discrimination:1 generative:1 accordingly:1 short:1 lr:6 blei:1 detecting:1 provides:1 pascanu:1 toronto:1 firstly:1 height:1 rc:1 direct:3 consists:2 ijcv:3 expected:1 examine:1 multi:3 inspired:1 relying:1 salakhutdinov:1 automatically:1 little:1 considering:1 stm:1 discover:3 underlying:1 xx:1 developed:2 finding:1 temporal:2 quantitative:2 tackle:1 growth:1 exactly:2 classifier:12 demonstrates:2 uk:1 unit:16 positive:1 local:2 tsinghua:1 encoding:1 mach:1 birthday:4 reception:1 studied:1 china:2 challenging:1 limited:1 range:1 directed:4 acknowledgment:1 camera:1 yj:16 testing:1 practice:1 procedure:1 word:6 integrating:1 mandel:1 varadarajan:1 altun:1 tsochantaridis:1 context:1 optimize:1 equivalent:1 www:1 center:1 maximizing:3 odobez:1 independently:1 survey:1 unstructured:8 rule:2 insight:1 fang:2 slas:2 tan:1 nlpr:1 user:3 exact:1 recognition:7 particularly:2 utilized:1 updating:1 labeled:5 bottom:1 wang:1 solved:1 calculate:3 connected:2 rewrite:1 laptev:2 predictive:2 classifer:1 upon:1 distinctive:1 multimodal:10 joint:2 po:10 easily:1 represented:2 describe:1 chellappa:1 ceremony:4 larger:2 widely:1 cvpr:1 otherwise:1 knn:2 jointly:4 transform:1 scaleinvariant:1 ip:1 online:1 advantage:1 melfrequency:1 propose:3 interaction:1 product:1 relevant:2 bow:8 achieve:2 academy:1 description:2 converges:1 object:1 help:1 derive:1 develop:1 ac:2 gong:1 iq:5 recurrent:1 eq:1 larochelle:1 attribute:12 filter:1 stochastic:3 human:7 rar:1 graduation:4 generalization:2 secondly:1 around:1 considered:1 ic:2 exp:11 bj:3 substituting:1 achieves:2 hvi:2 estimation:7 integrates:1 bag:1 label:4 successfully:1 mit:1 gaussian:5 hj:3 focus:1 joachim:1 likelihood:7 indicates:1 baseline:6 sense:1 inst:2 inference:11 nn:2 integrated:1 hidden:5 wij:7 arg:2 classification:19 denoted:1 development:1 art:2 softmax:15 constrained:1 integration:1 marginal:5 initialize:1 spatial:1 ng:1 represents:2 broad:1 unsupervised:1 icml:2 report:1 intelligent:1 few:1 employ:1 simultaneously:2 divergence:3 preserve:1 gamma:3 national:3 murphy:1 replaced:2 explosive:1 interest:2 mining:1 evaluation:1 pc:2 tj:8 edge:1 fu:1 taylor:1 logan:1 instance:5 modeling:3 soft:1 introducing:1 hundred:1 recognizing:2 xy0:1 rtm:26 eec:2 stip:4 probabilistic:1 vm:5 off:1 ym:16 together:1 w1:1 thesis:1 containing:2 huang:1 tr1:1 cb316300:1 expert:1 zhao:2 derivative:3 automation:1 coefficient:2 modl:3 vi:6 depends:1 performed:2 view:1 try:1 lowe:1 analyze:1 start:1 xing:1 annotation:1 identifiability:1 accuracy:8 variance:1 efficiently:1 bayesian:9 raw:1 mfcc:4 rectified:8 explain:2 ty:2 energy:1 frequency:1 naturally:2 associated:1 rbm:4 static:1 sampled:1 auditory:1 dataset:11 actually:1 higher:1 tipping:1 supervised:9 modal:4 done:1 trm:17 furthermore:1 just:1 until:1 correlation:3 schuldt:1 web:4 expressive:1 replacing:1 lack:1 rodriguez:1 defines:1 logistic:2 brings:1 lda:6 innate:1 facilitate:1 contain:3 concept:3 assigned:1 laboratory:1 semantic:7 illustrated:1 deal:2 neal:1 mel:1 generalized:2 demonstrate:1 motion:3 lse:7 image:1 variational:19 novel:1 discovers:1 common:1 multinomial:2 extend:1 refer:1 significant:1 hospedales:1 ai:3 automatic:4 talent:1 pm:1 ucf:2 posterior:9 recent:1 tmj:13 perspective:1 optimizing:1 irrelevant:1 scenario:2 inequality:1 binary:5 neg:9 employed:1 paradigm:1 maximize:3 ud:2 signal:1 semi:4 keypoints:1 ing:1 imodel:1 
determination:2 calculation:1 ahmed:2 long:1 retrieval:1 prediction:3 regression:2 basic:1 vision:1 expectation:1 represent:1 background:2 addition:1 modality:5 extra:2 w2:1 specially:1 med:3 undirected:10 facilitates:1 leveraging:1 jordan:1 ideal:1 bengio:1 switch:1 fit:2 idea:1 cn:1 tm:11 expression:1 emonet:1 hessian:1 hardly:1 action:5 listed:1 tune:1 repeating:1 mid:3 clip:1 concentrated:1 http:2 bagof:1 per:3 diverse:2 hyperparameter:2 medlda:7 express:1 group:8 kbj:4 changing:1 idata:1 prevent:1 clean:1 sum:2 convert:1 powerful:1 family:1 ismir:1 appendix:2 bound:8 layer:3 internet:1 quadratic:2 tieniu:1 nonnegative:1 activity:18 constraint:1 qmul:1 expanded:1 structured:1 according:3 turaga:1 icpr:1 combination:1 conjugate:2 vmi:4 beneficial:1 em:3 y0:1 trj:1 restricted:5 indexing:1 invariant:1 taken:1 rectification:1 equation:6 conjugacy:1 count:3 tractable:2 available:3 eight:1 hyperpriors:2 hierarchical:3 appropriate:1 robustness:1 shah:1 original:1 top:1 dirichlet:2 denotes:3 ensure:1 completed:1 graphical:6 clustering:1 music:3 chinese:1 erfc:3 objective:2 parade:1 ucf50:3 gradient:2 kth:1 topic:72 argue:1 collected:2 discriminant:2 besides:1 minimizing:1 liang:1 difficult:1 ql:1 negative:1 boltzmann:5 perform:1 upper:1 vertical:1 datasets:3 sm:3 descent:1 immediate:1 extended:3 incorporated:2 hinton:3 topical:2 discovered:1 tnt:1 download:1 introduced:1 namely:1 required:1 learned:5 nip:1 able:2 perception:1 sparsity:2 challenge:1 program:2 built:1 max:5 video:54 ia:1 suitable:1 power:2 natural:1 zhu:1 improve:2 technology:1 keypoint:1 axis:1 extract:2 text:1 prior:5 understanding:1 interdependent:1 relative:1 xiang:1 lacking:1 mixed:2 interesting:1 limitation:1 allocation:2 foundation:2 degree:2 consistent:1 principle:1 classifying:1 share:1 eccv:1 changed:1 supported:1 free:3 guide:1 bias:1 bohning:1 institute:1 explaining:1 taking:1 cepstral:2 sparse:18 dimension:3 vocabulary:1 collection:1 replicated:15 party:5 far:1 social:12 approximate:3 kaban:1 overfitting:1 assumed:1 spatio:1 discriminative:5 search:1 latent:15 table:4 additionally:1 learn:6 ca:1 caputo:1 complex:9 meanwhile:2 main:1 repeated:1 complementary:1 harva:1 wish:1 jmlr:4 weighting:1 htr:1 learns:3 minute:2 specific:2 sift:4 jensen:1 explored:1 svm:10 intractable:3 sequential:1 effectively:2 importance:1 phd:1 illustrates:2 subrahmanian:1 margin:4 gap:1 entropy:1 tc:1 appearance:1 visual:2 sport:1 trf:1 extracted:2 nair:1 conditional:6 viewed:1 sized:1 identity:1 towards:1 replace:1 content:6 change:1 specifically:1 generalisation:1 averaging:1 called:1 total:2 experimental:2 meaningful:1 exception:1 support:1 relevance:18 incorporate:1 evaluate:1 audio:1 dance:4 |
4,396 | 498 | Adaptive Soft Weight Tying
using Gaussian Mixtures
Steven J. Nowlan
Geoffrey E. Hinton
Computational Neuroscience Laboratory
The Salk Institute, P.O. Box 5800
San Diego, CA 92186-5800
Department of Computer Science
University of Toronto
Toronto, Canada M5S 1A4
Abstract
One way of simplifying neural networks so they generalize better is to add
an extra t.erm 10 the error fUll c tion that will penalize complexit.y. \Ve
prop ose a new pe nalt.y t.erm in which the dist rihution of weight values
is modelled as a mixture of multiple gaussians . Cnder this model, a set
of weights is simple if the weights can be clustered into subsets so that
the weights in each cluster have similar values . We allow the parameters
of the mixture model to adapt at t.he same time as t.he network learns.
Simulations demonstrate that this complexity term is more effective than
previous complexity terms.
1 Introduction
A major problem in training artificial neural networks is to ensure that they will generalize well to cases that they have not been trained on. Some recent theoretical results (Baum and Haussler, 1989) have suggested that in order to guarantee good generalization, the amount of information required to directly specify the output vectors of all the training cases must be considerably larger than the number of independent weights in the network. In many practical problems there is only a small amount of labelled data available for training and this creates problems for any approach that uses a large, homogeneous network with many independent weights. As a result, there has been much recent interest in techniques that can train large networks with relatively small amounts of labelled data and still provide good generalization performance.
In order to improve generalization, the number of free parameters in the network must be reduced. One of the oldest and simplest approaches to removing excess degrees of freedom from a net is to add an extra term to the error function that penalizes complexity:

cost = data-misfit + λ complexity   (1)
During learning, the network is trying to find a locally optimal trade-off between the data-misfit (the usual error term) and the complexity of the net. The relative importance of these two terms can be estimated by finding the value of λ that optimizes generalization to a validation set. Probably the simplest approximation to complexity is the sum of the squares of the weights, Σ_i w_i². Differentiating this complexity measure leads to simple weight decay (Plaut, Nowlan and Hinton, 1986) in which each weight decays towards zero at a rate that is proportional to its magnitude. This decay is countered by the gradient of the error term, so weights which are not critical to network performance, and hence always have small error gradients, decay away leaving only the weights necessary to solve the problem.
The use of a λ Σ_i w_i² penalty term can also be interpreted from a Bayesian perspective.¹ The "complexity" of a set of weights, λ Σ_i w_i², may be described as its negative log probability density under a radially symmetric gaussian prior distribution on the weights. The distribution is centered at the origin and has variance 1/λ. For multilayer networks, it is hard to find a good theoretical justification for this prior, but Hinton (1987) justifies it empirically by showing that it greatly improves generalization on a very difficult task. More recently, MacKay (1991) has shown that even better generalization can be achieved by using different values of λ for the weights in different layers.
2 A more complex measure of network complexity
If we wish to eliminate small weights without forcing large weights away from the values they need to model the data, we can use a prior which is a mixture of a narrow (n) and a broad (b) gaussian, both centered at zero:

p(w) = \pi_n \frac{1}{\sqrt{2\pi}\,\sigma_n} e^{-\frac{w^2}{2\sigma_n^2}} + \pi_b \frac{1}{\sqrt{2\pi}\,\sigma_b} e^{-\frac{w^2}{2\sigma_b^2}},   (2)
where π_n and π_b are the mixing proportions of the two gaussians and are therefore constrained to sum to 1.
Assuming that the weight values were generated from a gaussian mixture, the conditional probability that a particular weight, w_i, was generated by a particular gaussian, j, is called the responsibility of that gaussian for the weight and is:

r_j(w_i) = \frac{\pi_j\, p_j(w_i)}{\sum_k \pi_k\, p_k(w_i)},   (3)

where p_j(w_i) is the probability density of w_i under gaussian j.
When the mixing proportions of the two gaussians are comparable, the narrow gaussian gets most of the responsibility for a small weight. Adopting the Bayesian perspective, the cost of a weight under the narrow gaussian is proportional to w²/2σ_n². As long as σ_n is quite small there will be strong pressure to reduce the magnitude of small weights even further. Conversely, the broad gaussian takes most of the responsibility for large weight values, so there is much less pressure to reduce them.
1 R. Szeliski, personal communication, 1985.
In the limiting case when the broad gaussian becomes a uniform distribution, there is almost no pressure to reduce very large weights because they are almost certainly generated by the uniform distribution. A complexity term very similar to this limiting case is used in the "weight elimination" technique of (Weigend, Huberman and Rumelhart, 1990) to improve generalization for a time series prediction task.²
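A small runnable sketch of the responsibilities of Eq. (3) for the two-component prior of Eq. (2); the particular weights and standard deviations are chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

def responsibilities(w, pis, mus, sigmas):
    """r_j(w_i): posterior probability that gaussian j generated weight w_i (Eq. 3)."""
    dens = pis * norm.pdf(w[:, None], loc=mus, scale=sigmas)   # (n_weights, n_comp)
    return dens / dens.sum(axis=1, keepdims=True)

# narrow + broad zero-mean components, as in Eq. (2)
w = np.array([0.01, -0.03, 1.5, -2.0])
r = responsibilities(w, pis=np.array([0.5, 0.5]),
                     mus=np.zeros(2), sigmas=np.array([0.05, 2.0]))
print(r.round(3))   # small weights -> narrow component, large weights -> broad
```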
3 Adaptive Gaussian Mixtures and Soft Weight-Sharing
A mixture of a narrow, zero-mean gaussian with a broad gaussian or a uniform allows us to favor networks with many near-zero weights, and this improves generalization on many tasks. But practical experience with hand-coded weight constraints has also shown that great improvements can be achieved by constraining particular subsets of the weights to share the same value (Lang, Waibel and Hinton, 1990; Le Cun, 1989). Mixtures of zero-mean gaussians and uniforms cannot implement this type of symmetry constraint. If, however, we use multiple gaussians and allow their means and variances to adapt as the network learns, we can implement a "soft" version of weight-sharing in which the learning algorithm decides for itself which weights should be tied together. (We may also allow the mixing proportions to adapt so that we are not assuming all sets of tied weights are the same size.)
The basic idea is that a gaussian which takes responsibility for a subset of the weights will squeeze those weights together, since it can then have a lower variance and assign a higher probability density to each weight. If the gaussians all start with high variance, the initial division of weights into subsets will be very soft. As the variances shrink and the network learns, the decisions about how to group the weights into subsets are influenced by the task the network is learning to perform.
To make these intuitive ideas a bit more concrete, we may define a cost function of the general form given in (1):

C = \frac{K}{2\sigma_y^2} \sum_c (y_c - d_c)^2 - \sum_i \log \sum_j \pi_j\, p_j(w_i),   (4)

where σ_y² is the variance of the squared error and each p_j(w_i) is a gaussian density with mean μ_j and standard deviation σ_j. We optimize this function by adjusting the w_i and the mixture parameters π_j, μ_j, σ_j, and σ_y.³

The partial derivative of C with respect to each weight is the sum of the usual squared error derivative and a term due to the complexity cost for the weight:

\frac{\partial C}{\partial w_i} = \frac{K}{\sigma_y^2} \sum_c (y_c - d_c)\frac{\partial y_c}{\partial w_i} + \sum_j r_j(w_i)\, \frac{w_i - \mu_j}{\sigma_j^2}.   (5)

2 See (Nowlan, 1991) for a precise description of the relationship between mixture models and the model used by (Weigend, Huberman and Rumelhart, 1990).
3 1/σ_y² may be thought of as playing the same role as λ in equation 1 in determining a trade-off between the misfit and complexity costs. K is a normalizing factor based on a gaussian error model.
Method                  Train % Correct    Test % Correct
Vanilla Back Prop.      100.0 ± 0.0        67.3 ± 5.7
Cross Valid.            98.8 ± 1.1         83.5 ± 5.1
Weight Elimination      100.0 ± 0.0        89.8 ± 3.0
Soft-share - 5 Comp.    100.0 ± 0.0        95.6 ± 2.7
Soft-share - 10 Comp.   100.0 ± 0.0        97.1 ± 2.1

Table 1: Summary of generalization performance of 5 different training techniques on the shift detection problem.
The derivative of the complexity cost term is simply a weighted sum of the differences between the weight value and the center of each of the gaussians. The weighting factors are the responsibility measures defined in equation 3, and if over time a single gaussian claims most of the responsibility for a particular weight, the effect of the complexity cost term is simply to pull the weight towards the center of the responsible gaussian. The strength of this force is inversely proportional to the variance of the gaussian.
In the simulations described below, all of the parameters (w_i, μ_j, σ_j, π_j) are updated simultaneously using a conjugate gradient descent procedure. To prevent variances shrinking too fast or going negative we optimize log σ_j rather than σ_j. To ensure that the mixing proportions sum to 1 and are positive, we optimize x_j where π_j = exp(x_j)/Σ_k exp(x_k). For further details see (Nowlan and Hinton, 1992).
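The complexity-cost gradient and the parameterizations just described fit in a few lines; the sketch below (illustrative, not the authors' code) computes the per-weight pull toward the centers of the responsible gaussians:

```python
import numpy as np
from scipy.stats import norm

def complexity_grad(w, x, mus, log_sigmas):
    """sum_j r_j(w_i) (w_i - mu_j) / sigma_j^2, with pi_j = softmax(x_j) and
    sigma_j = exp(log_sigma_j), matching the parameterization in the text."""
    pis = np.exp(x - x.max()); pis /= pis.sum()
    sigmas = np.exp(log_sigmas)
    dens = pis * norm.pdf(w[:, None], loc=mus, scale=sigmas)
    r = dens / dens.sum(axis=1, keepdims=True)
    return (r * (w[:, None] - mus) / sigmas**2).sum(axis=1)

g = complexity_grad(np.array([0.01, 1.5]), np.zeros(2),
                    np.zeros(2), np.log(np.array([0.05, 2.0])))
```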
4 Simulation Results
We compared the generalization performance of soft weight-tying to other techniques on two different problems. The first problem, a 20 input, one output shift detection network, was chosen because it was a binary problem for which solutions that generalize well exhibit a lot of repeated weight structure. The generalization performance of networks trained using the cost criterion given in equation 4 was compared to networks trained in three other ways: no cost term to penalize complexity; no explicit complexity cost term, but use of a validation set to terminate learning; and weight elimination (Weigend, Huberman and Rumelhart, 1990).⁴ The simulation results are summarized in Table 1.
The network had 20 input units, 10 hidden units, and a single output unit and contained 101 weights. The first 10 input units in this network were given a random binary pattern, and the second group of 10 input units were given the same pattern circularly shifted by 1 bit left or right. The desired output of the network was +1 for a left shift and -1 for a right shift. A data set of 2400 patterns was created by randomly generating a 10 bit string, and choosing with equal probability to shift the string left or right. The data set was divided into 100 training cases, 1000 validation cases, and 1300 test cases. The training set was deliberately chosen to be very small (≈ 5% of possible patterns) to explore the region in which complexity penalties should have the largest impact. Ten simulations were performed with each
4 With a fixed value of λ chosen by cross-validation.
[Figure 1: Final mixture probability density for a typical solution to the shift detection problem. Five of the components in the mixture can be seen as distinct bumps in the probability density. Of the remaining five components, two have been eliminated by having their mixing proportions go to zero and the other three are very broad and form the baseline offset of the density function.]
method, starting from ten different initial weight sets (i.e. each method used the same ten initial weight configurations).
The final weight distributions discovered by the soft weight-tying technique are shown in Figure 1. There is no significant component with mean 0. The classical assumption that the network contains a large number of inessential weights which can be eliminated to improve generalization is not appropriate for this problem and network architecture. This may explain why the weight elimination model used by Weigend et al (Weigend, Huberman and Rumelhart, 1990) performs relatively poorly in this situation.
The second task chosen to evaluate the effectiveness of our complexity penalty was the prediction of the yearly sunspot average from the averages of previous years. This task has been well studied as a time-series prediction benchmark in the statistics literature (Priestley, 1991b; Priestley, 1991a) and has also been investigated by (Weigend, Huberman and Rumelhart, 1990) using a complexity penalty similar to the one discussed in section 2.
The network architecture used was identical to the one used in the study by Weigend et al: the network had 12 input units which represented the yearly averages from the preceding 12 years, 8 hidden units, and a single linear output unit which represented the prediction for the average number of sunspots in the current year. Yearly sunspot data from 1700 to 1920 was used to train the network to perform this one-step prediction task, and the evaluation of the network was based on data from
Method                  Test arv
TAR                     0.097
RBF                     0.092
WRH                     0.086
Soft-share - 3 Comp.    0.077 ± 0.0029
Soft-share - 8 Comp.    0.072 ± 0.0022

Table 2: Summary of average relative variance of 5 different models on the one-step sunspot prediction problem.
1921 to 1955.⁵ The evaluation of prediction performance used the average relative variance (arv) measure discussed in (Weigend, Huberman and Rumelhart, 1990).
Simulations were performed using the same conjugate gradient method used for the first problem. Complexity measures based on gaussian mixtures with 3 and 8 components were used and ten simulations were performed with each (using the same training data but different initial weight configurations). The results of these simulations are summarized in Table 2 along with the best result obtained by Weigend et al (Weigend, Huberman and Rumelhart, 1990) (WRH), the bilinear auto-regression model of Tong and Lim (Tong and Lim, 1980) (TAR)⁶, and the multi-layer RBF network of He and Lapedes (He and Lapedes, 1991) (RBF). All figures represent the arv on the test set. For the mixture complexity models, this is the average over the ten simulations, plus or minus one standard deviation.
Since the results for the models other than the mixture complexity trained networks are based on a single simulation it is difficult to assign statistical significance to the differences shown in Table 2. We may note, however, that the difference between the 3 and 8 component mixture complexity models is significant (p > 0.95) and the differences between the 8 component model and the other models are much larger.
Figure 2 shows an 8 component mixture model of the final weight distribution. It is quite unlike the distribution in Figure 1 and is actually quite close to a mixture of two zero-mean gaussians, one broad and one narrow. This may explain why weight elimination works quite well for this task.
Weigend et al point out that for time series prediction tasks such as the sunspot task a much more interesting measure of performance is the ability of the model to predict more than one time step into the future. One way to approach the multistep prediction problem is to use iterated single-step prediction. In this method, the predicted output is fed back as input for the next prediction and all other input units have their values shifted back one unit. Thus the input typically consists of a combination of actual and predicted values. When predicting more than one step into the future, the prediction error depends both on how many steps into the future one is predicting (I) and on what point in the time series the prediction began. An appropriate error measure for iterated prediction is the average relative I-times iterated prediction variance (Weigend, Huberman and Rumelhart, 1990)
5 The authors thank Andreas Weigend for providing his version of this data.
6 This was the model favored by Priestley (Priestley, 1991a) in a recent evaluation of classical statistical approaches to this task.
[Figure 2: Typical final mixture probability density for the sunspot prediction problem with a model containing 8 mixture components.]
[Figure 3: Average relative I-times iterated prediction variance versus number of prediction iterations for the sunspot time series from 1921 to 1955. Closed circles represent the TAR model, open circles the WRH model, closed squares the 3 component complexity model, and open squares the 8 component complexity model. Ten different sets of initial weights were used for the 3 and 8 component complexity models and one standard deviation error bars are shown.]
which averages predictions I steps into the future over all possible starting points. Using this measure, the performance of various models is shown in Figure 3.
5 Summary
The simulations we have described provide evidence that the use of a more flexible model for the distribution of weights in a network can lead to better generalization performance than weight decay, weight elimination, or techniques that control the learning time. The flexibility of our model is clearly demonstrated in the very different final weight distributions discovered for the two different problems investigated in this paper. The ability to automatically adapt to individual problems suggests that the method should have broad applicability.
Acknowledgements
This research was funded by the Ontario ITRC, the Canadian NSERC and the Howard Hughes Medical Institute. Hinton is the Noranda fellow of the Canadian Institute for Advanced Research.
References
Baum, E. B. and Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1:151-160.
He, X. and Lapedes, A. (1991). Nonlinear modelling and prediction by successive approximation using Radial Basis Functions. Technical Report LA-UR-91-1375, Los Alamos National Laboratory.
Hinton, G. E. (1987). Learning translation invariant recognition in a massively parallel network. In Proc. Conf. Parallel Architectures and Languages Europe, Eindhoven.
Lang, K. J., Waibel, A. H., and Hinton, G. E. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, 3:23-43.
Le Cun, Y. (1989). Generalization and network design strategies. Technical Report CRG-TR-89-4, University of Toronto.
MacKay, D. J. C. (1991). Bayesian Modelling and Neural Networks. PhD thesis, Computation and Neural Systems, California Institute of Technology, Pasadena, CA.
Nowlan, S. J. (1991). Soft Competitive Adaptation: Neural Network Learning Algorithms based on Fitting Statistical Mixtures. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Nowlan, S. J. and Hinton, G. E. (1992). Simplifying neural networks by soft weight-sharing. Neural Computation. In press.
Plaut, D. C., Nowlan, S. J., and Hinton, G. E. (1986). Experiments on learning by back-propagation. Technical Report CMU-CS-86-126, Carnegie-Mellon University, Pittsburgh, PA 15213.
Priestley, M. B. (1991a). Non-linear and Non-stationary Time Series Analysis. Academic Press.
Priestley, M. B. (1991b). Spectral Analysis and Time Series. Academic Press.
Tong, H. and Lim, K. S. (1980). Threshold autoregression, limit cycles, and cyclical data. Journal of the Royal Statistical Society B, 42.
Weigend, A. S., Huberman, B. A., and Rumelhart, D. E. (1990). Predicting the future: A connectionist approach. International Journal of Neural Systems, 1.
| 498 |@word version:2 llsed:1 proportion:4 nd:2 simulation:8 simplifying:1 pressure:3 tr:2 minus:1 veigend:3 initial:5 complexit:1 series:5 configuration:2 lapedes:2 usillg:1 current:1 wd:1 nt:1 nowlan:9 lang:2 si:1 must:1 lue:1 oldest:1 ial:1 compo:4 plaut:2 toronto:2 ional:1 five:2 along:1 istical:1 ect:1 consists:1 fitting:1 ra:1 dist:1 bility:1 multi:1 automatically:1 becomes:1 generalizat:2 null:1 what:2 tying:5 cm:1 interpreted:1 string:2 enor:1 finding:1 fellow:1 act:1 ful:1 control:1 unit:10 medical:1 t1:1 positive:1 limit:1 acad:1 ext:1 bilinear:1 plus:1 pam:1 au:1 studied:1 k:1 conversely:1 suggests:1 appl:1 co:1 practical:1 responsible:1 ond:1 hughes:1 implement:1 procedure:1 probabilit:1 vvas:1 word:1 radial:1 ilet:1 get:1 close:1 cal:1 optimize:3 moi:1 center:2 demonstrated:1 baum:1 go:1 starting:2 traming:1 pull:1 his:3 rl1:2 limiting:2 updated:1 diego:1 pt:1 homogeneous:1 us:1 origin:1 pa:2 rumelhart:6 recognition:2 ising:1 ep:1 steven:1 role:1 ft:1 difl:1 mal:1 wj:1 region:1 cycle:1 trade:2 nical:1 nellral:1 complexity:26 wil:1 thai:1 personal:1 hese:1 trained:1 ali:1 abd:1 networ:1 creates:1 division:1 basis:1 represented:1 various:1 resl:1 train:3 distinct:1 fast:1 effective:1 seri:1 artificial:1 choosing:1 quite:4 larger:1 solve:1 ive:2 simulat:1 favor:1 statistic:1 tlw:1 ability:1 itself:1 la4:1 final:4 net:3 fr:1 adaptation:1 mixing:4 poorly:1 flexibility:1 ontario:1 los:1 squeeze:1 cluster:1 generating:1 lvl:1 stat:1 erent:1 school:1 strong:1 predicted:1 c:1 aud:1 sllch:1 correct:2 centered:2 elimination:6 owl:1 assign:2 clustered:1 generalization:9 ied:1 ly90:1 pl:1 exp:2 great:1 uiii:1 claim:1 bump:1 major:1 weightsharing:1 largest:1 iojl:1 weighted:1 ivp:1 clearly:1 gaussian:22 always:1 rather:1 tar:2 bet:2 improvement:1 modelling:2 greatly:1 tech:1 baseline:1 wf:1 itrc:1 el:2 eliminate:1 typically:1 hidden:2 her:1 pasadena:1 going:1 oblem:1 ill:6 flexible:1 constrained:1 art:1 mackay:2 equal:1 having:1 eliminated:1 identical:1 broad:6 future:3 report:3 hint:1 sta:1 randomly:1 practi:1 ve:4 simultaneously:1 individual:1 national:1 cns:1 detection:3 evaluation:3 certainly:1 mixture:22 tical:1 partial:1 necessary:1 experience:1 iv:1 illg:1 penalizes:1 desired:1 re:1 circle:2 isolated:1 theoretical:1 ollt:1 soft:15 versi:2 ar:1 cost:9 applicability:1 deviation:3 subset:5 uniform:3 alamo:1 delay:1 too:1 st:2 density:5 gaussia:1 off:2 yl:1 together:1 concrete:1 squared:2 thesis:2 ear:1 containing:1 compu:1 conf:1 derivative:3 til:1 li:2 summarized:1 int:3 relat:1 ely:1 depends:1 ated:1 tion:1 performed:3 lot:1 responsibility:5 closed:2 start:1 competitive:1 parallel:1 sed:1 alld:4 square:2 variance:10 generalize:2 modelled:1 misfit:3 bayesian:2 iterated:3 foi:2 lli:2 m5s:1 explain:1 influenced:1 sharing:2 ed:2 ty:2 radially:1 adjusting:1 ask:2 lim:2 improves:2 actually:1 back:4 higher:1 illput:1 box:1 shrink:1 ano:3 hand:1 propagation:1 oil:1 effect:1 hence:1 symmetric:1 laboratory:2 vhen:1 pri:1 ll:2 during:1 trying:1 hill:1 demonstrate:1 arv:2 performs:1 l1:2 fj:1 priestley:4 ef:1 recently:1 began:1 ug:1 empirically:1 jl:1 discussed:1 he:18 significant:2 vanilla:1 language:1 had:2 funded:1 add:2 recent:1 perspective:2 optimizes:1 forcing:1 massively:1 binary:1 seen:1 aut:1 preceding:1 ey:1 teu:1 rained:1 ween:1 figlll:1 full:1 multiple:2 rj:1 llse:1 technical:1 adapt:4 af:2 cross:2 long:1 academic:1 divided:1 coded:1 prediction:14 basic:1 regression:1 multilayer:1 iteration:1 represent:3 adopting:1 achieved:2 ion:3 penalize:2 twork:1 leaving:1 extra:1 ot:1 unlike:1 
probably:1 negat:1 nonlin:1 effectiveness:1 call:1 near:1 constraining:1 iii:4 assumpt:1 canadian:1 xj:2 ioll:3 architecture:1 dlr:1 reduce:3 idea:2 andreas:1 shift:6 rurnelhart:1 lgo:1 penalty:4 wo:1 aiw:1 jj:3 amount:2 locally:1 ten:6 densit:1 simplest:2 reduced:1 hlq:1 shifted:2 neuroscience:1 estimated:1 carnegie:2 group:2 iter:1 olle:2 threshold:1 prevent:1 pj:3 sum:5 year:3 weigend:6 almost:2 aile:1 decision:1 comparable:1 bit:3 layer:2 hi:2 ct:1 ope:1 larg:1 strength:1 constraint:2 uno:1 lineal:1 relatively:2 department:1 waibel:1 combination:1 verage:1 conjugate:2 lld:1 wi:4 cun:2 trj:1 invariant:1 pr:1 erm:3 equation:2 ihe:1 i0t:2 fed:1 available:1 gaussians:5 autoregression:1 away:2 appropriate:2 hat:3 cent:1 neuml:1 remaining:1 ensure:1 vrh:1 yearly:3 archit:1 classical:2 dat:2 society:1 fa:2 strategy:1 usual:2 countered:1 exhibit:1 gradient:4 ow:1 lou:1 predi:1 tistical:1 me:1 assuming:1 gel:1 providing:1 pmc:1 difficult:2 ilw:1 negative:1 ba:1 design:1 perform:1 benchmark:1 howard:1 descent:1 trb:2 hinton:11 communication:1 precise:1 discovered:2 lb:1 canada:1 california:1 narrow:3 foj:1 bar:1 below:1 pattern:4 royal:1 critical:1 ual:1 force:1 predicting:1 advanced:1 improve:2 technology:1 inversely:1 created:1 rela:1 auto:1 prior:3 literature:1 acknowledgement:1 relative:3 allo:1 proportional:2 geoffrey:1 versus:1 age:2 validation:3 degree:1 rni:1 morl:1 share:5 summary:1 repeat:1 free:1 allow:3 institute:3 szeliski:1 differentiating:1 valid:1 adaptive:5 san:1 excess:1 uni:1 decides:1 pittsburgh:2 noranda:1 ctions:1 why:2 table:5 terminate:1 ca:2 investigated:2 cl:1 yi2:1 erion:1 rh:2 crgtr:1 sunspot:4 salk:1 tong:3 ose:1 shrinking:1 ny:1 wish:1 explicit:1 col:1 lie:1 pe:1 tied:1 lw:2 weighting:1 aver:1 learns:3 removing:1 tlte:1 offset:1 decay:5 evidence:1 circularly:1 importance:1 phd:2 magnitude:2 te:1 justifies:1 anu:1 lt:1 simply:2 explore:1 contained:1 nserc:1 cyclical:1 ma:1 prop:2 stati:1 conditional:1 lilli:1 intuit:1 rbf:3 towards:2 labelled:2 hard:1 typical:2 huberman:7 sene:1 called:1 e:2 la:1 est:2 ew:1 evaluate:1 |
4,397 | 4,980 | Streaming Variational Bayes
Tamara Broderick,
Nicholas Boyd, Andre Wibisono, Ashia C. Wilson
University of California, Berkeley
{tab@stat, nickboyd@eecs, wibisono@eecs, ashia@stat}.berkeley.edu
Michael I. Jordan
University of California, Berkeley
[email protected]
Abstract
We present SDA-Bayes, a framework for (S)treaming, (D)istributed,
(A)synchronous computation of a Bayesian posterior. The framework makes
streaming updates to the estimated posterior according to a user-specified approximation batch primitive. We demonstrate the usefulness of our framework, with
variational Bayes (VB) as the primitive, by fitting the latent Dirichlet allocation
model to two large-scale document collections. We demonstrate the advantages
of our algorithm over stochastic variational inference (SVI) by comparing the two
after a single pass through a known amount of data (a case where SVI may be
applied) and in the streaming setting, where SVI does not apply.
1 Introduction
Large, streaming data sets are increasingly the norm in science and technology. Simple descriptive
statistics can often be readily computed with a constant number of operations for each data point in
the streaming setting, without the need to revisit past data or have advance knowledge of future data.
But these time and memory restrictions are not generally available for the complex, hierarchical
models that practitioners often have in mind when they collect large data sets. Significant progress
on scalable learning procedures has been made in recent years [e.g., 1, 2]. But the underlying
models remain simple, and the inferential framework is generally non-Bayesian. The advantages
of the Bayesian paradigm (e.g., hierarchical modeling, coherent treatment of uncertainty) currently
seem out of reach in the Big Data setting.
An exception to this statement is provided by [3-5], who have shown that a class of approximation methods known as variational Bayes (VB) [6] can be usefully deployed for large-scale data
sets. They have applied their approach, referred to as stochastic variational inference (SVI), to the
domain of topic modeling of document collections, an area with a major need for scalable inference algorithms. VB traditionally uses the variational lower bound on the marginal likelihood as an
objective function, and the idea of SVI is to apply a variant of stochastic gradient descent to this
objective. Notably, this objective is based on the conceptual existence of a full data set involving D
data points (i.e., documents in the topic model setting), for a fixed value of D. Although the stochastic gradient is computed for a single, small subset of data points (documents) at a time, the posterior
being targeted is a posterior for D data points. This value of D must be specified in advance and is
used by the algorithm at each step. Posteriors for D! data points, for D! != D, are not obtained as
part of the analysis.
We view this lack of a link between the number of documents that have been processed thus far
and the posterior that is being targeted as undesirable in many settings involving streaming data.
In this paper we aim at an approximate Bayesian inference algorithm that is scalable like SVI but
1
is also truly a streaming procedure, in that it yields an approximate posterior for each processed
collection of D′ data points, and not just a pre-specified "final" number of data points D. To that
end, we return to the classical perspective of Bayesian updating, where the recursive application
of Bayes theorem provides a sequence of posteriors, not a sequence of approximations to a fixed
posterior. To this classical recursive perspective we bring the VB framework; our updates need
not be exact Bayesian updates but rather may be approximations such as VB. This approach is
similar in spirit to assumed density filtering or expectation propagation [7-9], but each step of those
methods involves a moment-matching step that can be computationally costly for models such as
topic models. We are able to avoid the moment-matching step via the use of VB. We also note other
related work in this general vein: MCMC approximations have been explored by [10], and VB or
VB-like approximations have also been explored by [11, 12].
Although the empirical success of SVI is the main motivation for our work, we are also motivated by
recent developments in computer architectures, which permit distributed and asynchronous computations in addition to streaming computations. As we will show, a streaming VB algorithm naturally
lends itself to distributed and asynchronous implementations.
2 Streaming, distributed, asynchronous Bayesian updating
Streaming Bayesian updating. Consider data x_1, x_2, ... generated iid according to a distribution p(x | Θ) given parameter(s) Θ. Assume that a prior p(Θ) has also been specified. Then Bayes theorem gives us the posterior distribution of Θ given a collection of S data points, C_1 := (x_1, ..., x_S):

p(\Theta \mid C_1) = p(C_1)^{-1}\, p(C_1 \mid \Theta)\, p(\Theta),

where p(C_1 \mid \Theta) = p(x_1, \ldots, x_S \mid \Theta) = \prod_{s=1}^{S} p(x_s \mid \Theta).
Suppose we have seen and processed b − 1 collections, sometimes called minibatches, of data. Given the posterior p(Θ | C_1, . . . , C_{b−1}), we can calculate the posterior after the bth minibatch:

p(Θ | C_1, . . . , C_b) ∝ p(C_b | Θ) p(Θ | C_1, . . . , C_{b−1}).    (1)
That is, we treat the posterior after b − 1 minibatches as the new prior for the incoming data points. If we can save the posterior from b − 1 minibatches and calculate the normalizing constant for the
bth posterior, repeated application of Eq. (1) is streaming; it automatically gives us the new posterior
without needing to revisit old data points.
In complex models, it is often infeasible to calculate the posterior exactly, and an approximation
must be used. Suppose that, given a prior p(Θ) and data minibatch C, we have an approximation algorithm A that calculates an approximate posterior q: q(Θ) = A(C, p(Θ)). Then, setting q_0(Θ) = p(Θ), one way to recursively calculate an approximation to the posterior is

p(Θ | C_1, . . . , C_b) ≈ q_b(Θ) = A(C_b, q_{b−1}(Θ)).    (2)
When A yields the posterior from Bayes theorem, this calculation is exact. This approach already differs from that of [3-5], which we will see (Sec. 3.2) directly approximates p(Θ | C_1, . . . , C_B) for fixed B without making intermediate approximations for b strictly between 1 and B.
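To make the recursion in Eq. (2) concrete, here is a minimal Python sketch of the streaming update, assuming a Beta-Bernoulli model so that the primitive A can be written in closed form (and is in fact the exact Bayes update); the function names are illustrative, not from the paper.

def beta_bernoulli_update(C, prior):
    # A(C, p): exact Bayes update for a Beta(a, b) prior and 0/1 data.
    a, b = prior
    heads = sum(C)
    return (a + heads, b + len(C) - heads)

def stream_posteriors(minibatches, prior, A=beta_bernoulli_update):
    # Eq. (2): the old posterior becomes the new prior, minibatch by minibatch.
    q = prior
    for C in minibatches:
        q = A(C, q)
        yield q

# Usage: a posterior is available after every minibatch, not only at the end.
for q in stream_posteriors([[1, 0, 1], [1, 1], [0]], prior=(1.0, 1.0)):
    print(q)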
Distributed Bayesian updating. The sequential updates in Eq. (2) handle streaming data in theory,
but in practice, the A calculation might take longer than the time interval between minibatch arrivals
or simply take longer than desired. Parallelizing computations increases algorithm throughput. And
posterior calculations need not be sequential. Indeed, Bayes theorem yields

p(Θ | C_1, . . . , C_B) ∝ [ ∏_{b=1}^B p(C_b | Θ) ] p(Θ) ∝ [ ∏_{b=1}^B p(Θ | C_b) p(Θ)^{−1} ] p(Θ).    (3)

That is, we can calculate the individual minibatch posteriors p(Θ | C_b), perhaps in parallel, and then combine them to find the full posterior p(Θ | C_1, . . . , C_B).
Given an approximating algorithm A as above, the corresponding approximate update would be

p(Θ | C_1, . . . , C_B) ≈ q(Θ) ∝ [ ∏_{b=1}^B A(C_b, p(Θ)) p(Θ)^{−1} ] p(Θ),    (4)
for some approximating distribution q, provided the normalizing constant for the right-hand side of
Eq. (4) can be computed.
Variational inference methods are generally based on exponential family representations [6], and we
will make that assumption here. In particular, we suppose p(Θ) ∝ exp{ξ_0 · T(Θ)}; that is, p(Θ) is an exponential family distribution for Θ with sufficient statistic T(Θ) and natural parameter ξ_0. We suppose further that A always returns a distribution in the same exponential family; in particular, we suppose that there exists some parameter ξ_b such that

q_b(Θ) ∝ exp{ξ_b · T(Θ)}  for  q_b(Θ) = A(C_b, p(Θ)).    (5)
When we make these two assumptions, the update in Eq. (4) becomes

p(Θ | C_1, . . . , C_B) ≈ q(Θ) ∝ exp{ [ ξ_0 + Σ_{b=1}^B (ξ_b − ξ_0) ] · T(Θ) },    (6)
where the normalizing constant is readily obtained from the exponential family form. In what follows we use the shorthand ξ ← A(C, ξ_0) to denote that A takes as input a minibatch C and a prior with exponential family parameter ξ_0 and that it returns a distribution in the same exponential family with parameter ξ.
So, to approximate p(Θ | C_1, . . . , C_B), we first calculate ξ_b via the approximation primitive A for each minibatch C_b; note that these calculations may be performed in parallel. Then we sum together the quantities ξ_b − ξ_0 across b, along with the initial ξ_0 from the prior, to find the final exponential
family parameter to the full posterior approximation q. We previously saw that the general Bayes
sequential update can be made streaming by iterating with the old posterior as the new prior (Eq. (2)).
Similarly, here we see that the full posterior approximation q is in the same exponential family as
the prior, so one may iterate these parallel computations to arrive at a parallelized algorithm for
streaming posterior computation.
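As a concrete illustration of Eq. (6), the sketch below combines minibatch natural parameters computed in parallel. It assumes a N(θ, 1) likelihood with a Gaussian prior, so that A is the exact conjugate update and the sufficient statistic is T(θ) = (θ, θ²); all names are illustrative.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def A(C, xi0):
    # Local primitive: return xi_b for one minibatch (exact Bayes here).
    return xi0 + np.array([C.sum(), -len(C) / 2.0])

xi0 = np.array([0.0, -0.5])                       # N(0, 1) prior
minibatches = [np.random.randn(100) + 2.0 for _ in range(8)]
with ThreadPoolExecutor() as pool:                # the B updates run in parallel
    xis = list(pool.map(lambda C: A(C, xi0), minibatches))
xi_post = xi0 + sum(xi - xi0 for xi in xis)       # Eq. (6)
print(-xi_post[0] / (2 * xi_post[1]))             # recover the posterior mean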
We emphasize that while these updates are reminiscent of prior-posterior conjugacy, it is actually
the approximate posteriors and single, original prior that we assume belong to the same exponential
family. It is not necessary to assume any conjugacy in the generative model itself nor that any true
intermediate or final posterior take any particular limited form.
Asynchronous Bayesian updating. Performing B computations in parallel can in theory speed up
algorithm running time by a factor of B, but in practice it is often the case that a single computation
thread takes longer than the rest. Waiting for this thread to finish diminishes potential gains from
distributing the computations. This problem can be ameliorated by making computations asynchronous. In this case, processors known as workers each solve a subproblem. When a worker
finishes, it reports its solution to a single master processor. If the master gives the worker a new
subproblem without waiting for the other workers to finish, it can decrease downtime in the system.
Our asynchronous algorithm is in the spirit of Hogwild! [1]. To present the algorithm we first
describe an asynchronous computation that we will not use in practice, but which will serve as a
conceptual stepping stone. Note in particular that the following scheme makes the computations
in Eq. (6) asynchronous. Have each worker continuously iterate between three steps: (1) collect a new minibatch C, (2) compute the local approximate posterior ξ ← A(C, ξ_0), and (3) return Δξ := ξ − ξ_0 to the master. The master, in turn, starts by assigning the posterior to equal the prior: ξ^(post) ← ξ_0. Each time the master receives a quantity Δξ from any worker, it updates the posterior synchronously: ξ^(post) ← ξ^(post) + Δξ. If A returns the exponential family parameter of the true posterior (rather than an approximation), then the posterior at the master is exact by Eq. (4).
A preferred asynchronous computation works as follows. The master initializes its posterior estimate to the prior: ξ^(post) ← ξ_0. Each worker continuously iterates between four steps: (1) collect a new minibatch C, (2) copy the master posterior value locally, ξ^(local) ← ξ^(post), (3) compute the local approximate posterior ξ ← A(C, ξ^(local)), and (4) return Δξ := ξ − ξ^(local) to the master. Each time the master receives a quantity Δξ from any worker, it updates the posterior synchronously: ξ^(post) ← ξ^(post) + Δξ.
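The following sketch mimics the preferred scheme with Python threads standing in for distributed workers (a real deployment would place workers on separate machines); A is the same illustrative exponential-family primitive as in the sketch above.

import threading, queue
import numpy as np

xi0 = np.array([0.0, -0.5])
xi_post = xi0.copy()                  # master state: xi^(post) <- xi_0
lock = threading.Lock()
work = queue.Queue()

def A(C, xi_prior):
    return xi_prior + np.array([C.sum(), -len(C) / 2.0])

def worker():
    global xi_post
    while True:
        C = work.get()
        if C is None:                 # sentinel: no more minibatches
            return
        with lock:
            xi_local = xi_post.copy() # step (2): copy the master posterior
        xi = A(C, xi_local)           # step (3): local approximation
        with lock:
            xi_post += xi - xi_local  # step (4): master adds delta xi

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for _ in range(32): work.put(np.random.randn(50) + 2.0)
for _ in threads: work.put(None)
for t in threads: t.join()
print(xi_post)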
The key difference between the first and second frameworks proposed above is that, in the second,
the latest posterior is used as a prior. This latter framework is more in line with the streaming update
of Eq. (2) but introduces a new layer of approximation. Since ξ^(post) might change at the master while the worker is computing Δξ, it is no longer the case that the posterior at the master is exact
when A returns the exponential family parameter of the true posterior. Nonetheless we find that the
latter framework performs better in practice, so we focus on it exclusively in what follows.
We refer to our overall framework as SDA-Bayes, which stands for (S)treaming, (D)istributed,
(A)synchronous Bayes. The framework is intended to be general enough to allow a variety of local
approximations A. Indeed, SDA-Bayes works out of the box once an implementation of A, and a prior on the global parameter(s) Θ, is provided. In the current paper our preferred local approximation will be VB.
3 Case study: latent Dirichlet allocation
In what follows, we consider examples of the choices for the Θ prior and primitive A in the context
of latent Dirichlet allocation (LDA) [13]. LDA models the content of D documents in a corpus.
Themes potentially shared by multiple documents are described by topics. The unsupervised learning problem is to learn the topics as well as discover which topics occur in which documents.
More formally, each topic (of K total topics) is a distribution over the V words in the vocabulary: β_k = (β_kv)_{v=1}^V. Each document is an admixture of topics. The words in document d are assumed to be exchangeable. Each word w_dn belongs to a latent topic z_dn chosen according to a document-specific distribution of topics θ_d = (θ_dk)_{k=1}^K. The full generative model, with Dirichlet priors for β_k and θ_d conditioned on respective parameters η_k and α, appears in [13].
To see that this model fits our specification in Sec. 2, consider the set of global parameters Θ = β. Each document w_d = (w_dn)_{n=1}^{N_d} is distributed iid conditioned on the global topics. The full collection of data is a corpus C = w = (w_d)_{d=1}^D of documents. The posterior for LDA, p(β, θ, z | C, η, α), is equal to the following expression up to proportionality:

p(β, θ, z | C, η, α) ∝ [ ∏_{k=1}^K Dirichlet(β_k | η_k) ] · [ ∏_{d=1}^D Dirichlet(θ_d | α) ] · [ ∏_{d=1}^D ∏_{n=1}^{N_d} θ_{d,z_dn} β_{z_dn,w_dn} ].    (7)
The posterior for just the global parameters p(β | C, η, α) can be obtained from p(β, θ, z | C, η, α) by integrating out the local, document-specific parameters θ, z. As is common in complex models, the
normalizing constant for Eq. (7) is intractable to compute, so the posterior must be approximated.
3.1 Posterior-approximation algorithms
To apply SDA-Bayes to LDA, we use the prior specified by the generative model. It remains to
choose a posterior-approximation algorithm A. We consider two possibilities here: variational
Bayes (VB) and expectation propagation (EP). Both primitives take Dirichlet distributions as priors
for β and both return Dirichlet distributions for the approximate posterior of the topic parameters β;
thus the prior and approximate posterior are in the same exponential family. Hence both VB and EP
can be utilized as a choice for A in the SDA-Bayes framework.
Mean-field variational Bayes. We use the shorthand p_D for Eq. (7), the posterior given D documents. We assume the approximating distribution, written q_D for shorthand, takes the form

q_D(β, θ, z | λ, γ, φ) = [ ∏_{k=1}^K q_D(β_k | λ_k) ] · [ ∏_{d=1}^D q_D(θ_d | γ_d) ] · [ ∏_{d=1}^D ∏_{n=1}^{N_d} q_D(z_dn | φ_{d,w_dn}) ]    (8)
for parameters (λ_kv), (γ_dk), (φ_dvk) with k ∈ {1, . . . , K}, v ∈ {1, . . . , V}, d ∈ {1, . . . , D}. Moreover, we set q_D(β_k | λ_k) = Dirichlet_V(β_k | λ_k), q_D(θ_d | γ_d) = Dirichlet_K(θ_d | γ_d), and q_D(z_dn | φ_{d,w_dn}) = Categorical_K(z_dn | φ_{d,w_dn}). The subscripts on Dirichlet and Categorical indicate the dimensions of the distributions (and of the parameters).
The problem of VB is to find the best approximating q_D, defined as the collection of variational parameters λ, γ, φ that minimize the KL divergence from the true posterior: KL(q_D ‖ p_D). Even
finding the minimizing parameters is a difficult optimization problem. Typically the solution is
approximated by coordinate descent in each parameter [6, 13] as in Alg. 1. The derivation of VB for
LDA can be found in [4, 13] and Sup. Mat. A.1.
Algorithm 1: VB for LDA
  Input: Data (n_d)_{d=1}^D; hyperparameters α, η
  Output: λ
  Initialize λ
  while (λ, γ, φ) not converged do
    for d = 1, . . . , D do
      (γ_d, φ_d) ← LocalVB(d, λ)
    ∀(k, v), λ_kv ← η_kv + Σ_{d=1}^D φ_dvk n_dv

  Subroutine LocalVB(d, λ):
    Output: (γ_d, φ_d)
    Initialize γ_d
    while (γ_d, φ_d) not converged do
      ∀(k, v), set φ_dvk ∝ exp(E_q[log θ_dk] + E_q[log β_kv]) (normalized across k)
      ∀k, γ_dk ← α_k + Σ_{v=1}^V φ_dvk n_dv

Algorithm 2: SVI for LDA
  Input: Hyperparameters α, η, D, (ρ_t)_{t=1}^T
  Output: λ
  Initialize λ
  for t = 1, . . . , T do
    Collect new data minibatch C
    foreach document indexed d in C do
      (γ_d, φ_d) ← LocalVB(d, λ)
    ∀(k, v), λ̃_kv ← η_kv + (D/|C|) Σ_{d in C} φ_dvk n_dv
    ∀(k, v), λ_kv ← (1 − ρ_t) λ_kv + ρ_t λ̃_kv

Algorithm 3: SSU for LDA
  Input: Hyperparameters α, η
  Output: A sequence λ^(1), λ^(2), . . .
  Initialize ∀(k, v), λ_kv^(0) ← η_kv
  for b = 1, 2, . . . do
    Collect new data minibatch C
    foreach document indexed d in C do
      (γ_d, φ_d) ← LocalVB(d, λ^(b−1))
    ∀(k, v), λ_kv^(b) ← λ_kv^(b−1) + Σ_{d in C} φ_dvk n_dv

Figure 1: Algorithms for calculating λ, the parameters for the topic posteriors in LDA. VB iterates multiple times through the data, SVI makes a single pass, and SSU is streaming. Here, n_dv represents the number of words v in document d.
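The LocalVB subroutine is the piece shared by all three algorithms. A minimal Python sketch, assuming lam is the K × V matrix of topic Dirichlet parameters and n_d the length-V word-count vector of document d, and using the Dirichlet identity E_q[log x_i] = digamma(a_i) − digamma(Σ_j a_j):

import numpy as np
from scipy.special import digamma

def local_vb(n_d, lam, alpha, n_iters=50):
    K, V = lam.shape
    gamma_d = np.ones(K)
    Elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    for _ in range(n_iters):
        Elog_theta = digamma(gamma_d) - digamma(gamma_d.sum())
        phi_d = np.exp(Elog_theta[:, None] + Elog_beta)  # K x V
        phi_d /= phi_d.sum(axis=0, keepdims=True)        # normalize across k
        gamma_d = alpha + phi_d @ n_d                    # gamma_dk update
    return gamma_d, phi_d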
Expectation propagation. An EP [7] algorithm for approximating the LDA posterior appears in
Alg. 6 of Sup. Mat. B. Alg. 6 differs from [14], which does not provide an approximate posterior for
the topic parameters, and is instead our own derivation. Our version of EP, like VB, learns factorized
Dirichlet distributions over topics.
3.2 Other single-pass algorithms for approximate LDA posteriors
The algorithms in Sec. 3.1 pass through the data multiple times and require storing the data set in
memory, but are useful as primitives for SDA-Bayes in the context of the processing of minibatches
of data. Next, we consider two algorithms that can pass through a data set just one time (single pass)
and to which we compare in the evaluations (Sec. 4).
Stochastic variational inference. VB uses coordinate descent to find a value of q_D, Eq. (8), that locally minimizes the KL divergence, KL(q_D ‖ p_D). Stochastic variational inference (SVI) [3, 4] is exactly the application of a particular version of stochastic gradient descent to the same optimization problem. While stochastic gradient descent can often be viewed as a streaming algorithm, the optimization problem itself here depends on D via p_D, the posterior on D data points. We see that, as a result, D must be specified in advance, appears in each step of SVI (see Alg. 2), and is independent of the number of data points actually processed by the algorithm. Nonetheless, while one may choose to visit D′ ≠ D data points or revisit data points when using SVI to estimate p_D [3, 4], SVI can be made single-pass by visiting each of D data points exactly once and then has constant memory requirements. We also note that two new parameters, τ_0 > 0 and κ ∈ (0.5, 1], appear in SVI, beyond those in VB, to determine a learning rate ρ_t as a function of iteration t: ρ_t := (τ_0 + t)^{−κ}.
Sufficient statistics. On each round of VB (Alg. 1), we update the local parameters for all documents and then compute λ_kv ← η_kv + Σ_{d=1}^D φ_dvk n_dv. An alternative single-pass (and indeed streaming) option would be to update the local parameters for each minibatch of documents as they arrive and then add the corresponding terms φ_dvk n_dv to the current estimate of λ for each document
d in the minibatch. This essential idea has been proposed previously for models other than LDA by
[11, 12] and forms the basis of what we call the sufficient statistics update algorithm (SSU): Alg. 3.
This algorithm is equivalent to SDA-Bayes with A chosen to be a single iteration over the global variable λ of VB (i.e., updating λ exactly once instead of iterating until convergence).
                        Wikipedia                            Nature
                 32-SDA   1-SDA    SVI     SSU       32-SDA   1-SDA    SVI     SSU
Log pred prob    −7.31    −7.43    −7.32   −7.91     −7.11    −7.19    −7.08   −7.82
Time (hours)      2.09    43.93     7.87    8.28      0.55    10.02     1.22    1.27

Table 1: A comparison of (1) log predictive probability of held-out data and (2) running time of four algorithms: SDA-Bayes with 32 threads, SDA-Bayes with 1 thread, SVI, and SSU.
4 Evaluation
We follow [4] (and further [15, 16]) in evaluating our algorithms by computing (approximate) predictive probability. Under this metric, a higher score is better, as a better model will assign a higher
probability to the held-out words.
We calculate predictive probability by first setting aside held-out testing documents C (test) from the
full corpus and then further setting aside a subset of held-out testing words Wd,test in each testing
document d. The remaining (training) documents C (train) are used to estimate the global parameter
posterior q(β), and the remaining (training) words W_{d,train} within the dth testing document are used to estimate the document-specific parameter posterior q(θ_d).¹ To calculate predictive probability,
an approximation is necessary since we do not know the predictive distribution, just as we seek to learn the posterior distribution. Specifically, we calculate the normalized predictive distribution and report "log predictive probability" as

log predictive probability = [ Σ_{d ∈ C^(test)} Σ_{w_test ∈ W_{d,test}} log p(w_test | C^(train), W_{d,train}) ] / [ Σ_{d ∈ C^(test)} |W_{d,test}| ],

where we use the approximation

p(w_test | C^(train), W_{d,train}) = ∫_{θ_d} ∫_β ( Σ_{k=1}^K θ_dk β_{k,w_test} ) p(θ_d | W_{d,train}, β) p(β | C^(train)) dθ_d dβ
  ≈ ∫_{θ_d} ∫_β ( Σ_{k=1}^K θ_dk β_{k,w_test} ) q(θ_d) q(β) dθ_d dβ = Σ_{k=1}^K E_q[θ_dk] E_q[β_{k,w_test}].
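Assuming Dirichlet variational posteriors q(θ_d) = Dir(γ_d) and q(β_k) = Dir(λ_k), the final expectations above are simple normalized means, so a small sketch of the per-document computation could read (names illustrative):

import numpy as np

def log_predictive_prob(gamma_d, lam, test_words):
    # Mean of log p(w_test | ...) over the held-out words of one document.
    E_theta = gamma_d / gamma_d.sum()              # E_q[theta_dk], shape (K,)
    E_beta = lam / lam.sum(axis=1, keepdims=True)  # E_q[beta_kv], shape (K, V)
    probs = E_theta @ E_beta[:, test_words]        # sum_k E[theta] E[beta]
    return np.log(probs).mean()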
To facilitate comparison with SVI, we use the Wikipedia and Nature corpora of [3, 5] in our experiments. These two corpora represent a range of sizes (3,611,558 training documents for Wikipedia
and 351,525 for Nature) as well as different types of topics. We expect words in Wikipedia to represent an extremely broad range of topics whereas we expect words in Nature to focus more on the
sciences. We further use the vocabularies of [3, 5] and SVI code available online at [17]. We hold
out 10,000 Wikipedia documents and 1,024 Nature documents (not included in the counts above)
for testing. In the results presented in the main text, we follow [3, 4] in fitting an LDA model
with K = 100 topics and hyperparameters chosen as: ∀k, α_k = 1/K; ∀(k, v), η_kv = 0.01. For both Wikipedia and Nature, we set the parameters in SVI according to the optimal values of the parameters described in Table 1 of [3] (number of documents D correctly set in advance, step size parameters κ = 0.5 and τ_0 = 64).
Figs. 3(a) and 3(d) demonstrate that both SVI and SDA are sensitive to minibatch size when η_kv = 0.01, with generally superior performance at larger batch sizes. Interestingly, both SVI and SDA performance improve and are steady across batch size when η_kv = 1 (Figs. 3(a) and 3(d)). Nonetheless, we use η_kv = 0.01 in what follows in the interest of consistency with [3, 4]. Moreover, in the remaining experiments, we use a large minibatch size of 2^15 = 32,768. This size is the largest before SVI performance degrades in the Nature data set (Fig. 3(d)).
Performance and timing results are shown in Table 1. One would expect that with additional streaming capabilities, SDA-Bayes should show a performance loss relative to SVI. We see from Table 1
¹ In all cases, we estimate q(θ_d) for evaluative purposes using VB since direct EP estimation takes prohibitively long.
[Figure 2 omitted: four panels plotting, versus number of threads (1-32), (a) Wikipedia log predictive probability, (b) Nature log predictive probability, (c) Wikipedia run time (hours), and (d) Nature run time (hours), with synchronous and asynchronous SDA-Bayes curves.]
Figure 2: SDA-Bayes log predictive probability (two left plots) and run time (two right plots) as a function of number of threads.
that such loss is small in the single-thread case, while SSU performs much worse. SVI is faster than
single-thread SDA-Bayes in this single-pass setting.
Full SDA-Bayes improves run time with no performance cost. We handicap SDA-Bayes in the
above comparisons by utilizing just a single thread. In Table 1, we also report performance of SDA-Bayes with 32 threads and the same minibatch size. In the synchronous case, we consider minibatch
size to equal the total number of data points processed per round; therefore, the minibatch size equals
the number of data points sent to each thread per round times the total number of threads. In the
asynchronous case, we analogously report minibatch size as this product.
Fig. 2 shows the performance of SDA-Bayes when we run with {1, 2, 4, 8, 16, 32} threads while
keeping the minibatch size constant. The goal in such a distributed context is to improve run time
while not hurting performance. Indeed, we see dramatic run time improvement as the number of
threads grows and in fact some slight performance improvement as well. We tried both a parallel version and a full distributed, asynchronous version of the algorithm; Fig. 2 indicates that the
speedup and performance improvements we see here come from parallelizing, which is theoretically justified by Eq. (3) when A is Bayes rule. Our experiments indicate that our Hogwild!-style
asynchrony does not hurt performance. In our experiments, the processing time at each thread seems
to be approximately equal across threads and dominate any communication time at the master, so
synchronous and asynchronous performance and running time are essentially identical. In general,
a practitioner might prefer asynchrony since it is more robust to node failures.
SVI is sensitive to the choice of total data size D. The evaluations above are for a single posterior
over D data points. Of greater concern to us in this work is the evaluation of algorithms in the
streaming setting. We have seen that SVI is designed to find the posterior for a particular, pre-chosen number of data points D. In practice, when we run SVI on the full data set but change the
input value of D in the algorithm, we can see degradations in performance. In particular, we try
values of D equal to {0.01, 0.1, 1, 10, 100} times the true D in Fig. 3(b) for the Wikipedia data set
and in Fig. 3(e) for the Nature data set.
A practitioner in the streaming setting will typically not know D in advance, or multiple values of
D may be of interest. Figs. 3(b) and 3(e) illustrate that an estimate may not be sufficient. Even in
the case where D is known in advance, it is reasonable to imagine a new influx of further data. One
might need to run SVI again from the start (and, in so doing, revisit the first data set) to obtain the
desired performance.
SVI is sensitive to learning step size. [3, 5] use cross-validation to tune step-size parameters
(τ_0, κ) in the stochastic gradient descent component of the SVI algorithm. This cross-validation
requires multiple runs over the data and thus is not suited to the streaming setting. Figs. 3(c) and 3(f)
demonstrate that the parameter choice does indeed affect algorithm performance. In these figures,
we keep D at the true training data size.
[3] have observed that the optimal (τ_0, κ) may interact with minibatch size, and we further observe
that the optimal values may vary with D as well. We also note that recent work has suggested a way
to update (τ_0, κ) adaptively during an SVI run [18].
EP is not suited to LDA. Earlier attempts to apply EP to the LDA model in the non-streaming
setting have had mixed success, with [19] in particular finding that EP performance can be poor for
LDA and, moreover, that EP requires "unrealistic intermediate storage requirements." We found
[Figure 3 omitted: six panels of log predictive probability — (a) sensitivity to minibatch size on Wikipedia, (b) SVI sensitivity to D on Wikipedia, (c) SVI sensitivity to stepsize parameters (τ_0, κ) on Wikipedia, (d) sensitivity to minibatch size on Nature, (e) SVI sensitivity to D on Nature, (f) SVI sensitivity to stepsize parameters on Nature.]
Figure 3: Sensitivity of SVI and SDA-Bayes to some respective parameters. Legends have the same top-to-bottom order as the rightmost curve points.
this to also be true in the streaming setting. We were not able to obtain competitive results with EP;
based on an 8-thread implementation of SDA-Bayes with an EP primitive², after over 91 hours on Wikipedia (and 6.7 × 10⁴ data points), log predictive probability had stabilized at around −7.95 and, after over 97 hours on Nature (and 9.7 × 10⁴ data points), log predictive probability had stabilized at around −8.02. Although SDA-Bayes with the EP primitive is not effective for LDA, it remains to be
seen whether this combination may be useful in other domains where EP is known to be effective.
5 Discussion
We have introduced SDA-Bayes, a framework for streaming, distributed, asynchronous computation of an approximate Bayesian posterior. Our framework makes streaming updates to the estimated posterior according to a user-specified approximation primitive. We have demonstrated the
usefulness of our framework, with variational Bayes as the primitive, by fitting the latent Dirichlet
allocation topic model to the Wikipedia and Nature corpora. We have demonstrated the advantages
of our algorithm over stochastic variational inference and the sufficient statistics update algorithm,
particularly with respect to the key issue of obtaining approximations to posterior probabilities based
on the number of documents seen thus far, not posterior probabilities for a fixed number of documents.
Acknowledgments
We thank M. Hoffman, C. Wang, and J. Paisley for discussions, code, and data and our reviewers
for helpful comments. TB is supported by the Berkeley Fellowship, NB by a Hertz Foundation
Fellowship, and ACW by the Chancellor's Fellowship at UC Berkeley. This research is supported in
part by NSF award CCF-1139158, DARPA Award FA8750-12-2-0331, AMPLab sponsor donations,
and the ONR under grant number N00014-11-1-0688.
² We chose 8 threads since any fewer was too slow to get results and anything larger created too high of a memory demand on our system.
References
[1] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Neural Information Processing Systems, 2011.
[2] A. Kleiner, A. Talwalkar, P. Sarkar, and M. Jordan. The big data bootstrap. In International Conference on Machine Learning, 2012.
[3] M. Hoffman, D. M. Blei, and F. Bach. Online learning for latent Dirichlet allocation. In Neural Information Processing Systems, volume 23, pages 856-864, 2010.
[4] M. Hoffman, D. M. Blei, J. Paisley, and C. Wang. Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347, 2013.
[5] C. Wang, J. Paisley, and D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In Artificial Intelligence and Statistics, 2011.
[6] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[7] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, pages 362-369. Morgan Kaufmann, 2001.
[8] T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[9] M. Opper. A Bayesian approach to on-line learning. In On-Line Learning in Neural Networks. Cambridge University Press, 1998.
[10] K. R. Canini, L. Shi, and T. L. Griffiths. Online inference of topics with latent Dirichlet allocation. In Artificial Intelligence and Statistics, volume 5, 2009.
[11] A. Honkela and H. Valpola. On-line variational Bayesian learning. In International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803-808, 2003.
[12] J. Luts, T. Broderick, and M. P. Wand. Real-time semiparametric regression. Journal of Computational and Graphical Statistics, to appear. Preprint arXiv:1209.3550.
[13] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[14] T. Minka and J. Lafferty. Expectation-propagation for the generative aspect model. In Uncertainty in Artificial Intelligence, pages 352-359. Morgan Kaufmann, 2002.
[15] Y. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Neural Information Processing Systems, 2006.
[16] A. Asuncion, M. Welling, P. Smyth, and Y. Teh. On smoothing and inference for topic models. In Uncertainty in Artificial Intelligence, 2009.
[17] M. Hoffman. Online inference for LDA (Python code) at http://www.cs.princeton.edu/~blei/downloads/onlineldavb.tar, 2010.
[18] R. Ranganath, C. Wang, D. M. Blei, and E. P. Xing. An adaptive learning rate for stochastic variational inference. In International Conference on Machine Learning, 2013.
[19] W. L. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In Uncertainty in Artificial Intelligence, 2004.
[20] M. Seeger. Expectation propagation for exponential families. Technical report, University of California at Berkeley, 2005.
Scalable Inference for Logistic-Normal Topic Models
Jianfei Chen, Jun Zhu, Zi Wang, Xun Zheng and Bo Zhang
State Key Lab of Intelligent Tech. & Systems; Tsinghua National TNList Lab;
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
{chenjf10,wangzi10}@mails.tsinghua.edu.cn;
{dcszj,dcszb}@mail.tsinghua.edu.cn; [email protected]
Abstract
Logistic-normal topic models can effectively discover correlation structures
among latent topics. However, their inference remains a challenge because of the
non-conjugacy between the logistic-normal prior and multinomial topic mixing
proportions. Existing algorithms either make restricting mean-field assumptions
or are not scalable to large-scale applications. This paper presents a partially collapsed Gibbs sampling algorithm that approaches the provably correct distribution
by exploring the ideas of data augmentation. To improve time efficiency, we further present a parallel implementation that can deal with large-scale applications
and learn the correlation structures of thousands of topics from millions of documents. Extensive empirical results demonstrate the promise.
1 Introduction
In Bayesian models, though conjugate priors normally result in easier inference problems, nonconjugate priors could be more expressive in capturing desired model properties. One popular example is admixture topic models which have obtained much success in discovering latent semantic
structures from data. For the most popular latent Dirichlet allocation (LDA) [5], a Dirichlet distribution is used as the conjugate prior for multinomial mixing proportions. But a Dirichlet prior
is unable to model topic correlation, which is important for understanding/visualizing the semantic
structures of complex data, especially in large-scale applications. One elegant extension of LDA
is the logistic-normal topic models (aka correlated topic models, CTMs) [3], which use a logistic-normal prior to capture the correlation structures among topics effectively. Along this line, many
subsequent extensions have been developed, including dynamic topic models [4] that deal with time
series via a dynamic linear system on the Gaussian variables and infinite CTMs [11] that can resolve
the number of topics from data.
The modeling flexibility comes with computational cost. Although significant progress has been
made on developing scalable inference algorithms for LDA using either distributed [10, 16, 1] or online [7] learning methods, the inference of logistic-normal topic models still remains a challenge, due
to the non-conjugate priors. Existing algorithms on learning logistic-normal topic models mainly
rely on approximate techniques, e.g., variational inference with unwarranted mean-field assumptions [3]. Although variational methods have a deterministic objective to optimize and are usually
efficient, they could only achieve an approximate solution. If the mean-field assumptions are not
made appropriately, the approximation could be unsatisfactory. Furthermore, existing algorithms
can only deal with small corpora and learn a limited number of topics. It is important to develop
scalable algorithms in order to apply the models to large collections of documents, which are becoming increasingly common in both scientific and engineering fields.
To address the limitations listed above, we develop a scalable Gibbs sampling algorithm for logistic-normal topic models, without making any restricting assumptions on the posterior distribution. Technically, to deal with the non-conjugate logistic-normal prior, we introduce auxiliary Polya-Gamma
1
variables [13], following the statistical ideas of data augmentation [17, 18, 8]; and the augmented
posterior distribution leads to conditional distributions from which we can draw samples easily without accept/reject steps. Moreover, the auxiliary variables are locally associated with each individual
document, and this locality naturally allows us to develop a distributed sampler by splitting the documents into multiple subsets and allocating them to multiple machines. The global statistics can
be updated asynchronously without sacrificing the predictive ability on unseen testing documents.
We successfully apply the scalable inference algorithm to learning a correlation graph of thousands
of topics on large corpora with millions of documents. These results are the largest automatically
learned topic correlation structures to our knowledge.
2 Logistic-Normal Topic Models
Let W = {w_d}_{d=1}^D be a set of documents, where w_d = {w_dn}_{n=1}^{N_d} denote the words appearing in document d of length N_d. A hierarchical Bayesian topic model posits each document as an admixture of K topics, where each topic Φ_k is a multinomial distribution over a V-word vocabulary. For a logistic-normal topic model (e.g., CTM), the generating process of document d is:

η_d ∼ N(μ, Σ),   θ_d^k = e^{η_d^k} / Σ_{j=1}^K e^{η_d^j},   ∀n ∈ {1, . . . , N_d}: z_dn ∼ Mult(θ_d), w_dn ∼ Mult(Φ_{z_dn}),

where Mult(·) denotes the multinomial distribution; z_dn is a K-dimensional binary vector with only one nonzero element; and Φ_{z_dn} denotes the topic selected by the non-zero entry of z_dn. For Bayesian CTM, the topics are samples drawn from a prior, e.g., Φ_k ∼ Dir(β), where Dir(·) is a Dirichlet distribution. Note that for identifiability, normally we assume η_d^K = 0.
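For intuition, a short Python sketch of this generative process (names illustrative; the identifiability constraint is omitted for simplicity):

import numpy as np

def generate_document(mu, Sigma, Phi, N_d, rng=None):
    # Phi is K x V with each row a topic distribution on the vocabulary.
    rng = rng or np.random.default_rng()
    K, V = Phi.shape
    eta = rng.multivariate_normal(mu, Sigma)    # eta_d ~ N(mu, Sigma)
    theta = np.exp(eta - eta.max())             # logistic transformation,
    theta /= theta.sum()                        # stabilized numerically
    z = rng.choice(K, size=N_d, p=theta)        # z_dn ~ Mult(theta_d)
    words = np.array([rng.choice(V, p=Phi[k]) for k in z])
    return words, z, eta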
Given a set of documents W, CTM infers the posterior distribution p(η, Z, Φ | W) ∝ p_0(η, Z, Φ) p(W | Z, Φ) by the Bayes' rule. This problem is generally hard because of the non-conjugacy between the normal prior and the logistic transformation function (which can be seen as a likelihood model for η). Existing approaches resort to variational approximate methods [3] with strict factorization assumptions. To avoid mean-field assumptions and improve the inference accuracy, below we present a partially collapsed Gibbs sampler, which is simple to implement and can be naturally parallelized for large-scale applications.
3 Gibbs Sampling with Data Augmentation
We now present a block-wise Gibbs sampling algorithm for logistic-normal topic models. To improve mixing rates, we first integrate out the Dirichlet variables Φ, by exploring the conjugacy between a Dirichlet prior and multinomial likelihood. Specifically, we can integrate out Φ and perform Gibbs sampling for the marginalized distribution:

p(η, Z | W) ∝ p(W | Z) ∏_{d=1}^D ∏_{n=1}^{N_d} θ_d^{z_dn} N(η_d | μ, Σ) ∝ [ ∏_{k=1}^K δ(C_k + β) / δ(β) ] ∏_{d=1}^D ∏_{n=1}^{N_d} [ e^{η_d^{z_dn}} / Σ_{j=1}^K e^{η_d^j} ] N(η_d | μ, Σ),

where C_k^t is the number of times topic k is assigned to the term t over the whole corpus; C_k = {C_k^t}_{t=1}^V; and δ(x) = ∏_{i=1}^{dim(x)} Γ(x_i) / Γ( Σ_{i=1}^{dim(x)} x_i ) is a function defined with the Gamma function Γ(·).

3.1 Sampling Topic Assignments
When the variables η = {η_d}_{d=1}^D are given, we draw samples from p(Z | η, W). In our Gibbs sampler, this is done by iteratively drawing a sample for each word in each document. The local conditional distribution is:

p(z_dn^k = 1 | Z_¬n, w_dn, W_¬dn, η) ∝ p(w_dn | z_dn^k = 1, Z_¬n, W_¬dn) e^{η_d^k} ∝ [ (C_{k,¬n}^{w_dn} + β_{w_dn}) / (Σ_{j=1}^V C_{k,¬n}^j + Σ_{j=1}^V β_j) ] e^{η_d^k},    (1)

where C_{·,¬n} indicates that term n is excluded from the corresponding document or topic.
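A direct sketch of this draw, assuming C_kw is the K × V topic-word count matrix with the current token already decremented (names illustrative):

import numpy as np

def sample_z(C_kw, beta, eta_d, w, rng=None):
    rng = rng or np.random.default_rng()
    p = (C_kw[:, w] + beta[w]) / (C_kw.sum(axis=1) + beta.sum())
    p = p * np.exp(eta_d)                     # multiply by exp(eta_d^k)
    return rng.choice(len(p), p=p / p.sum())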
3.2 Sampling Logistic-Normal Parameters
When the topic assignments Z are given, we draw samples from the posterior distribution

p(η | Z, W) ∝ ∏_{d=1}^D ∏_{n=1}^{N_d} [ e^{η_d^{z_dn}} / Σ_{j=1}^K e^{η_d^j} ] N(η_d | μ, Σ),

which is a Bayesian logistic regression model with Z as the multinomial observations. Though it is hard to draw samples directly due to non-conjugacy, we can leverage recent advances in data augmentation to solve this inference task efficiently, with analytical local conditionals for Gibbs sampling, as detailed below.
Specifically, we have the likelihood of "observing" the topic assignments z_d for document d¹ as p(z_d | η_d) = ∏_{n=1}^{N_d} e^{η_d^{z_dn}} / Σ_{j=1}^K e^{η_d^j}. Following Holmes & Held [8], the likelihood for η_d^k conditioned on η_d^{¬k} is:

ℓ(η_d^k | η_d^{¬k}) = ∏_{n=1}^{N_d} ( e^{ρ_d^k} / (1 + e^{ρ_d^k}) )^{z_dn^k} ( 1 / (1 + e^{ρ_d^k}) )^{1 − z_dn^k} = (e^{ρ_d^k})^{C_d^k} / (1 + e^{ρ_d^k})^{N_d},

where ρ_d^k = η_d^k − ζ_d^k; ζ_d^k = log( Σ_{j≠k} e^{η_d^j} ); and C_d^k = Σ_{n=1}^{N_d} z_dn^k is the number of words assigned to topic k in document d. Therefore, we have the conditional distribution

p(η_d^k | η_d^{¬k}, Z, W) ∝ ℓ(η_d^k | η_d^{¬k}) N(η_d^k | μ_d^k, σ_k²),    (2)

where μ_d^k = μ_k − Λ_kk^{−1} Λ_{k,¬k} (η_d^{¬k} − μ_{¬k}) and σ_k² = Λ_kk^{−1}; Λ = Σ^{−1} is the precision matrix of the Gaussian distribution.
k
This is a posterior distribution of a Bayesian logistic model with a Gaussian prior, where zdn
are
binary response variables. Due to the non-conjugacy between the normal prior and logistic likelihood, we do not have analytical form of this posterior distribution. Although standard Monte Carlo
methods (e.g., rejection sampling) can be applied, they normally require a good proposal distribution and may have the trouble to deal with accept/reject rates. Data augmentation techniques have
been developed, e.g., [8] presented a two layer data augmentation representation with logistic distributions and [9] applied another data augmentation with uniform variables and truncated Gaussian
distributions, which may involve sophisticated accept/reject strategies [14]. Below, we develop a
simple exact sampling method without a proposal distribution.
Our method is based on a new data augmentation representation, following the recent developments
in Bayesian logistic regression [13], which is a direct data augmentation scheme with only one layer
of auxiliary variables and does not need to tune in order to get optimal performance. Specifically,
for the above posterior inference problem, we can show the following lemma.
Lemma 1 (Scale Mixture Representation). The likelihood ?(?dk |?d?k ) can be expressed as
k
k
(e?d )Cd
?k
d Nd
(1 + e )
=
1
k k
2Nd
e?d ?d
Z
?
e?
k 2
?k
d (?d )
2
p(?kd |Nd , 0)d?kd ,
0
where ?kd = Cdk ? Nd /2 and p(?kd |Nd , 0) is the Polya-Gamma distribution PG(Nd , 0).
The lemma suggest that p(?dk |?d?k , Z, W) is a marginal distribution of the complete distribution
p(?dk , ?kd |?d?k , Z, W) ?
1
2Nd
?k (?k )2
exp ?kd ?kd ? d d
p(?kd |Nd , 0)N (?dk |?kd , ?k2 ).
2
Therefore, we can draw samples from the complete distribution. By discarding the augmented
variable ?kd , we get the samples of the posterior distribution p(?dk |?d?k , Z, W).
?k (? k )2
For ?dk : we have p(?dk |?d?k , Z, W, ?kd ) ? exp ?kd ?dk ? d 2 d N (?dk |?, ? 2 ) = N (?dk , (?dk )2 ),
where the posterior mean is ?dk = (?dk )2 (?k?2 ?kd + ?kd + ?kd ?dk ) and the variance is (?dk )2 = (?k?2 +
?kd )?1 . Therefore, we can easily draw a sample from a univariate Gaussian distribution.
For ?kd : the conditional distribution of the augmented variable is p(?kd |Z, W, ?) ? exp ?
k 2
?k
d (?d )
p(?kd |Nd , 0) = PG ?kd ; Nd , ?kd , which is again a Polya-Gamma distribution by using the
2
construction definition of the general PG(a, b) class through an exponential tilting of the PG(a, 0)
density [13]. To draw samples from the Polya-Gamma distribution, note that a naive implementation
of the sampling using the infinite sum-of-Gamma representation is not efficient and it also involves
a potentially inaccurate step of truncating the infinite sum. Here we adopt the exact method proposed in [13], which draws the samples through drawing Nd samples from PG(1, ?dk ). Since Nd is
normally large, we will develop a fast and effective approximation in the next section.
1
Due to the independence, we can treat documents separately.
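Putting Eq. (2) and Lemma 1 together, one augmented Gibbs step for η_d^k can be sketched as below; it assumes an external Polya-Gamma sampler such as random_polyagamma from the polyagamma Python package, and the helper names are illustrative.

import numpy as np
from polyagamma import random_polyagamma   # assumed external PG sampler

def sample_eta_dk(eta_d, k, C_dk, N_d, mu_dk, sigma2_k, rng=None):
    rng = rng or np.random.default_rng()
    zeta = np.log(np.sum(np.exp(np.delete(eta_d, k))))  # zeta_d^k
    rho = eta_d[k] - zeta                                # rho_d^k
    lam = random_polyagamma(N_d, rho)                    # lambda ~ PG(N_d, rho)
    kappa = C_dk - N_d / 2.0                             # kappa_d^k
    tau2 = 1.0 / (1.0 / sigma2_k + lam)                  # (tau_d^k)^2
    gamma = tau2 * (mu_dk / sigma2_k + kappa + lam * zeta)
    return rng.normal(gamma, np.sqrt(tau2))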
[Figure 3 omitted: two histograms comparing approximate Polya-Gamma draws f(z), z ∼ PG(m, ρ), for m = 1, 2, 4, 8 against the exact m = n sampler.]
Figure 3: (a) frequency of f(z) with z ∼ PG(m, ρ); and (b) frequency of samples from η_d^k ∼ p(η_d^k | η_d^{¬k}, Z, W). Though z is not from the exact distribution, the distribution of η_d^k is very accurate. The parameters ρ_d^k = −4.19, C_d^k = 19, N_d = 1155, μ_d^k = 0.40, σ_d² = 0.31, and ζ = 5.35 are from a real distribution when training on the NIPS data set.

3.3 Fully-Bayesian Models
We can treat μ and Σ as random variables and perform fully-Bayesian inference, by using the conjugate Normal-Inverse-Wishart prior, p_0(μ, Σ) = NIW(μ_0, ρ, κ, W), that is

Σ | κ, W ∼ IW(Σ; κ, W^{−1}),   μ | Σ, μ_0, ρ ∼ N(μ; μ_0, Σ/ρ),

where IW(Σ; κ, W^{−1}) = [ |W|^{κ/2} / ( 2^{κM/2} Γ_M(κ/2) ) ] |Σ|^{−(κ+M+1)/2} exp( −(1/2) Tr(W Σ^{−1}) ) is the inverse Wishart distribution and (μ_0, ρ, κ, W) are hyper-parameters. Then, the conditional distribution is

p(μ, Σ | η, Z, W) ∝ p_0(μ, Σ) ∏_d p(η_d | μ, Σ) = NIW(μ_0′, ρ′, κ′, W′),    (3)

which is still a Normal-Inverse-Wishart distribution due to the conjugate property, with parameters μ_0′ = (ρ μ_0 + D η̄) / (ρ + D), ρ′ = ρ + D, κ′ = κ + D and W′ = W + Q + (ρD / (ρ + D)) (η̄ − μ_0)(η̄ − μ_0)^⊤, where η̄ = (1/D) Σ_d η_d is the empirical mean of the data and Q = Σ_d (η_d − η̄)(η_d − η̄)^⊤.
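A sketch of the Eq. (3) update followed by a draw of (μ, Σ), assuming eta is the D × K matrix of per-document Gaussian variables and using SciPy's inverse-Wishart sampler (names illustrative):

import numpy as np
from scipy.stats import invwishart

def sample_mu_sigma(eta, mu0, rho, kappa, W, rng=None):
    rng = rng or np.random.default_rng()
    D = eta.shape[0]
    eta_bar = eta.mean(axis=0)
    Q = (eta - eta_bar).T @ (eta - eta_bar)
    mu0_p = (rho * mu0 + D * eta_bar) / (rho + D)
    rho_p, kappa_p = rho + D, kappa + D
    diff = (eta_bar - mu0)[:, None]
    W_p = W + Q + (rho * D / (rho + D)) * (diff @ diff.T)
    Sigma = invwishart.rvs(df=kappa_p, scale=W_p, random_state=rng)
    mu = rng.multivariate_normal(mu0_p, Sigma / rho_p)
    return mu, Sigma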
4 Parallel Implementation and Fast Approximate Sampling

The above Gibbs sampler can be naturally parallelized to extract large correlation graphs from millions of documents, due to the following observations. First, both λ_d and η_d are conditionally independent given μ and Σ, which makes it natural to distribute documents over machines and infer the local λ_d and η_d. No communication is needed for this sampling step. Second, the global variables μ and Σ can be inferred and broadcast to every machine after each iteration. As mentioned in Section 3.3, this involves: 1) computing the NIW posterior parameters, and 2) sampling from Eq. (3). Notice that the η_d contribute to the posterior parameters μ_0′, W′ through a simple summation operator, so we can perform local summation on each machine, followed by a global aggregation. Similarly, an NIW sample can be drawn distributively, by computing the sample covariance of x_1, . . . , x_{κ′} drawn from N(x | 0, W′) distributively after broadcasting W′. Finally, the topic assignments z_d are conditionally independent given the topic counts C_k. We synchronize C_k globally by leveraging the recent advances on scalable inference of LDA [1, 16], which implemented a general framework to synchronize such counts.
To further speed up the inference algorithm, we designed a fast approximate sampling method to draw PG(n, ρ) samples, reducing the time complexity from O(n) in [13] to O(1). Specifically, Polson et al. [13] show how to efficiently generate PG(1, ρ) random variates. Due to the additive property of the Polya-Gamma distribution, y ∼ PG(n, ρ) if x_i ∼ PG(1, ρ) and y = Σ_{i=1}^n x_i. However, this sampler can be slow when n is large. For our Gibbs sampler, n is the document length, often around hundreds. Fortunately, an effective approximation can be developed to achieve constant-time sampling of PG. Since n is relatively large, the sum variable y should be almost normally distributed, according to the central limit theorem. Fig. 3(a) confirms this intuition. Consider another PG variable z ∼ PG(m, ρ). If both m and n are large, y and z should both be samples from a normal distribution. Hence, we can do a simple linear transformation of z to approximate y. Specifically, we have f(z) = √(Var(y)/Var(z)) (z − E[z]) + E[y], where E[y] = (n/(2ρ)) tanh(ρ/2) from [12], and Var(z)/Var(y) = m/n since both y and z are sums of PG(1, ρ) variates. It can be shown that f(z) and y have the same mean and variance. In practice, we found that even when m = 1, the algorithm still can draw good samples from p(η_d^k | η_d^{¬k}, Z, W) (see Fig. 3(b)). Hence, we are able to speed up the Polya-Gamma sampling process significantly by applying this approximation. More empirical analysis can be found in the appendix.
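A sketch of this O(1) approximation, again assuming random_polyagamma from the polyagamma package; it further assumes ρ ≠ 0 (the mean formula b/(2ρ) tanh(ρ/2) has the limit b/4 as ρ → 0):

import numpy as np
from polyagamma import random_polyagamma

def approx_pg(n, rho, m=1, rng=None):
    # f(z): moment-matched linear transform of z ~ PG(m, rho) to mimic PG(n, rho).
    z = random_polyagamma(m, rho)
    mean = lambda b: b / (2.0 * rho) * np.tanh(rho / 2.0)   # E[PG(b, rho)]
    return np.sqrt(n / m) * (z - mean(m)) + mean(n)         # matches mean and variance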
Furthermore, we can perform sparsity-aware fast sampling [19] in the Gibbs sampler. Specifically, let

A_k = [ C_{k,¬n}^{w_dn} / ( Σ_{j=1}^V C_{k,¬n}^j + Σ_{j=1}^V β_j ) ] e^{η_d^k},   B_k = [ β_{w_dn} / ( Σ_{j=1}^V C_{k,¬n}^j + Σ_{j=1}^V β_j ) ] e^{η_d^k};

then Eq. (1) can be written as p(z_dn^k = 1 | Z_¬n, w_dn, W_¬dn, η) ∝ A_k + B_k. Let Z_A = Σ_k A_k and Z_B = Σ_k B_k. We can show that the sampling of z_dn can be done by sampling from Mult(A/Z_A) or Mult(B/Z_B), due to the fact:

p(z_dn^k = 1 | Z_¬n, w_dn, W_¬dn, η) = (A_k + B_k) / (Z_A + Z_B) = (1 − p) A_k / Z_A + p B_k / Z_B,    (4)

where p = Z_B / (Z_A + Z_B). Note that Eq. (4) is a marginalization with respect to an auxiliary binary variable. Thus a sample of z_dn can be drawn by flipping a coin with probability p of being head. If it is tail, we draw z_dn from Mult(A/Z_A); otherwise from Mult(B/Z_B). The advantage is that we only need to consider the non-zero entries of A to sample from Mult(A/Z_A). In fact, A has few non-zero entries due to the sparsity of the topic counts C_k. Thus, the time complexity would be reduced from O(K) to O(s(K)), where s(K) is the average number of non-zero entries in C_k. In practice, C_k is very sparse, hence s(K) ≪ K when K is large. To sample from Mult(B/Z_B), we iterate over all K potential assignments. But since p is typically small, O(K) time complexity is acceptable.
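A sketch of this two-bucket draw, assuming A and B are the length-K vectors defined above and nonzero_topics indexes the topics with nonzero counts for the current word (names illustrative):

import numpy as np

def sparse_sample_z(A, B, nonzero_topics, rng=None):
    rng = rng or np.random.default_rng()
    Z_A, Z_B = A[nonzero_topics].sum(), B.sum()
    if rng.random() < Z_B / (Z_A + Z_B):      # head: the dense but rare B bucket
        return rng.choice(len(B), p=B / Z_B)
    return rng.choice(nonzero_topics, p=A[nonzero_topics] / Z_A)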
With the above techniques, the time complexity per document of the Gibbs sampler is O(N_d s(K)) for sampling z_d, O(K²) for computing (μ_d^k, σ_k²), and O(SK) for sampling η_d with Eq. (2), where S is the number of sub-burn-in steps over sampling η_d^k. Thus the overall time complexity is O(N_d s(K) + K² + SK), which is higher than the O(N_d s(K)) complexity of LDA [1] when K is large, indicating a cost for the enriched representation of CTM compared to LDA.
5 Experiments
We now present qualitative and quantitative evaluation to demonstrate the efficacy and scalability of
the Gibbs sampler for CTM (denoted by gCTM). Experiments are conducted on a 40-node cluster,
where each node is equipped with two 6-core CPUs (2.93GHz). For all the experiments, if not
explicitly mentioned, we set the hyper-parameters as β = 0.01, T = 350, S = 8, m = 1, ρ = κ = 0.01D, μ_0 = 0, and W = κI, where T is the number of burn-in steps. We will use M to denote
the number of machines and P to denote the number of CPU cores. For baselines, we compare
with the variational CTM (vCTM) [3] and the state-of-the-art LDA implementation, Yahoo! LDA
(Y!LDA) [1]. In order to achieve fair comparison, for both vCTM and gCTM we select T such that
the models converge sufficiently, as we shall discuss later in Section 5.3.
Data Sets: Experiments are conducted on several benchmark data sets, including NIPS paper abstracts, 20Newsgroups, and NYTimes (New York Times) corpora from [2] and the Wikipedia corpus
from [20]. All the data sets are randomly split into training and testing sets. Following the settings
in [3], we partition each document in the testing set into an observed part and a held-out part.
5.1 Qualitative Evaluation
We first examine the correlation structure of 1,000 topics learned by CTM using our scalable sampler
on the NYTimes corpus with 285,000 documents. Since the entire correlation graph is too large, we
build a 3-layer hierarchy by clustering the learned topics, with their learned correlation strength
as the similarity measure. Fig. 4 shows a part of the hierarchy², where the subgraph A represents
the top layer with 10 clusters. The subgraphs B and C are two second layer clusters; and D and
E are two correlation subgraphs consisting of leaf nodes (i.e., learned topics). To represent their
semantic meanings, we present 4 most frequent words for each topic; and for each topic cluster,
we also show most frequent words by building a hyper-topic that aggregates all the included topics.
On the top layer, the font size of each word in a word cloud is proportional to its frequency in the
hyper-topic. Clearly, we can see that many topics have strong correlations and the structure is useful
to help humans understand/browse the large collection of topics. With 40 machines, our parallel
Gibbs sampler finishes the training in 2 hours, which means that we are able to process real world
corpus in considerable speed. More details on scalability will be provided below.
² The entire correlation graph can be found on http://ml-thu.net/?scalable-ctm
[Figure 4 omitted: the three-layer hierarchy; the number beside each node denotes the number of topics that cluster contains.]
Figure 4: A hierarchical visualization of the correlation graph with 1,000 topics learned from 285,000 articles of the NYTimes. A denotes the top-layer subgraph with 10 big clusters; B and C denote two second-layer clusters; and D and E are two subgraphs with leaf nodes (i.e., topics). We present most frequent words of each topic cluster. Edges denote a correlation (above some threshold) and the distance between two nodes represents the strength of their correlation. The node size of a cluster is determined by the number of topics included in that cluster.
[Figure 5 omitted: four panels of perplexity and training time versus K, comparing vCTM, gCTM (M = 1, P = 1), gCTM (M = 1, P = 12), gCTM (M = 40, P = 480), and Y!LDA (M = 40, P = 480).]
Figure 5: (a)(b): Perplexity and training time of vCTM, single-core gCTM, and multi-core gCTM on the NIPS data set; (c)(d): Perplexity and training time of single-machine gCTM, multi-machine gCTM, and multi-machine Y!LDA on the NYTimes data set.
5.2 Performance
We begin with an empirical assessment on the small NIPS data set, whose training set contains
1.2K documents. Fig. 5(a)&(b) show the performance of three single-machine methods: vCTM
(M = 1, P = 1), sequential gCTM (M = 1, P = 1), and parallel gCTM (M = 1, P = 12).
Fig. 5(a) shows that both versions of gCTM produce similar or better perplexity, compared to vCTM.
Moreover, Fig. 5(b) shows that when K is large, the advantage of gCTM becomes salient, e.g.,
sequential gCTM is about 7.5 times faster than vCTM; and multi-core gCTM achieves almost two
orders of magnitude of speed-up compared to vCTM.
In Table 1, we compare the efficiency of vCTM and gCTM on different sized data sets. It can be
observed that vCTM immediately becomes impractical when the data size reaches 285K, while by
utilizing additional computing resources, gCTM is able to process larger data sets with considerable
speed, making it applicable to real world problems. Note that gCTM has almost the same training
time on the NIPS and 20Newsgroups data sets, due to their small sizes. In such cases, the algorithm
is dominated by synchronization rather than computation.

Table 1: Training time of vCTM and gCTM (M = 40) on various data sets.

  data set    D     K     vCTM    gCTM
  NIPS        1.2K  100   1.9 hr  8.9 min
  20NG        11K   200   16 hr   9 min
  NYTimes     285K  400   N/A*    0.5 hr
  Wiki        6M    1000  N/A*    17 hr
  (*not finished within 1 week.)
Fig. 5(c)&(d) show the results on the NYTimes corpus, which contains over 285K training documents
and cannot be handled well by non-parallel methods. Therefore we concentrate on three parallel
methods: single-machine gCTM (M = 1, P = 12), multi-machine gCTM (M = 40, P = 480), and
multi-machine Y!LDA (M = 40, P = 480). We can see that: 1) both versions of gCTM obtain
comparable perplexity to Y!LDA; and 2) gCTM (M = 40) is over an order of magnitude faster
than the single-machine method, achieving considerable speed-up with additional computing
resources. These observations suggest that gCTM is able to handle large data sets without sacrificing
the quality of inference. Also note that Y!LDA is faster than gCTM because of the model difference:
LDA does not learn correlation structure among topics. As analyzed in Section 4, the time
complexity of gCTM is O(K^2 + SK + N_d s(K)) per document, while for LDA it is O(N_d s(K)).
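Plugging illustrative numbers into these per-document costs makes the gap concrete; the operation counts below use arbitrary constants (not measured values) and only show how the K^2 term comes to dominate as K grows.

```python
def ops_gctm(K, S, Nd, s):
    # Per-document cost of gCTM: O(K^2 + S*K + Nd*s(K)).
    return K ** 2 + S * K + Nd * s

def ops_lda(K, Nd, s):
    # Per-document cost of LDA: O(Nd*s(K)).
    return Nd * s

for K in (100, 400, 1000):
    S, Nd, s = 8, 300, 10      # illustrative values only
    print(K, ops_gctm(K, S, Nd, s) / ops_lda(K, Nd, s))
```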
5.3 Sensitivity
Burn-In and Sub-Burn-In: Fig. 6(a)&(b) show the effect of burn-in steps and sub-burn-in steps on
the NIPS data set with K = 100. We also include vCTM for comparison. For vCTM, T denotes
the number of iterations of its EM loop in the variational context. Our main observations are twofold:
1) despite various S, all versions of gCTM reach a similar level of perplexity that is better than
vCTM; and 2) a moderate number of sub-iterations, e.g. S = 8, leads to the fastest convergence.
This experiment also provides insights on determining the number of outer iterations T that assures
convergence for both models. We adopt Cauchy's criterion [15] for convergence: given an ε > 0, an
algorithm converges at iteration T if ∀i, j ≥ T, |Perp_i − Perp_j| < ε, where Perp_i and Perp_j are
the perplexity at iterations i and j respectively. In practice, we set ε = 20 and run experiments with a
very large number of iterations. As a result, we obtained T = 350 for gCTM and T = 8 for vCTM, as
indicated by the corresponding vertical line segments in Fig. 6(a)&(b).
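This stopping rule is straightforward to automate. The following is a minimal sketch of the Cauchy-style check on a recorded perplexity trace; in practice the trace is finite, so the test is necessarily applied to the iterations observed so far.

```python
def convergence_iteration(perps, eps=20.0):
    """Return the smallest T such that |perp_i - perp_j| < eps for all
    i, j >= T in the recorded trace, or None if the trace never settles.

    The pairwise condition over the tail is equivalent to
    max(tail) - min(tail) < eps.
    """
    for T in range(len(perps)):
        tail = perps[T:]
        if max(tail) - min(tail) < eps:
            return T
    return None
```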
Figure 6: Sensitivity analysis with respect to key hyper-parameters: (a) perplexity at each iteration
with different S; (b) convergence speed with different S; (c) perplexity tested with different prior.
Prior: Fig. 6(c) shows perplexity under different prior settings. To avoid an expensive search in a huge
space, we set (μ0, ρ, W, ν) = (0, a, aI, a) to test the effect of the NIW prior, where a larger a implies
more pseudo-observations of μ = 0, Σ = I. We can see that for both K = 50 and K = 100, the
perplexity is invariant under a wide range of prior settings. This suggests that gCTM is insensitive
to prior values.
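The pseudo-observation reading of the prior follows from the standard conjugate update for a normal-inverse-Wishart prior. The sketch below uses a common (mu0, rho, W, nu) parameterization that we assume matches the paper's; the authors' implementation may differ in details.

```python
import numpy as np

def niw_posterior(X, mu0, rho, W, nu):
    """Standard conjugate NIW posterior update given data X of shape (N, D)."""
    N, D = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)                  # scatter matrix
    rho_n, nu_n = rho + N, nu + N
    mu_n = (rho * mu0 + N * xbar) / rho_n
    W_n = W + S + (rho * N / rho_n) * np.outer(xbar - mu0, xbar - mu0)
    return mu_n, rho_n, W_n, nu_n

# With (mu0, rho, W, nu) = (0, a, aI, a), a large `a` acts like `a`
# pseudo-observations with mean 0 and identity covariance, pulling the
# posterior of (mu, Sigma) toward (0, I) regardless of the data.
D, a = 5, 1024.0
X = np.random.randn(100, D) * 3.0 + 2.0
mu_n, rho_n, W_n, nu_n = niw_posterior(X, np.zeros(D), a, a * np.eye(D), a)
print(mu_n)                        # shrunk toward 0
print(W_n / (nu_n - D - 1))        # posterior mean of Sigma; tends to I as a grows
```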
5.4 Scalability

Fig. 7 shows the scalability of gCTM on the large Wikipedia data set with K = 500. A practical
problem in real world machine learning is that when computing resources are limited, the running
time soon surges to an intolerable level as the data size grows. Ideally, this problem can be solved
by adding the same ratio of computing nodes. Our experiment demonstrates that gCTM performs
well in this scenario: as we pour in the same proportion of data and machines, the training time is
kept almost constant. In fact, the largest difference from the ideal curve is about 1,000 seconds,
which is almost unobservable in the figure. This suggests that parallel gCTM enjoys nice scalability.

Figure 7: Scalability analysis. We set M = 8, 16, 24, 32, 40 so that each machine processes 150K
documents.
6 Conclusions and Discussions
We present a scalable Gibbs sampling algorithm for logistic-normal topic models. Our method
builds on a novel data augmentation formulation and addresses the non-conjugacy without making
strict mean-field assumptions. The algorithm is naturally parallelizable and can be further boosted
by approximate sampling techniques. Empirical results demonstrate significant improvement in time
efficiency over existing variational methods, with slightly better perplexity. Our method enjoys good
scalability, suggesting the ability to extract large structures from massive data.
In the future, we plan to study the performance of Gibbs CTM on industry-level clusters with
thousands of machines. We are also interested in developing scalable sampling algorithms for other
logistic-normal topic models, e.g., infinite CTM and dynamic topic models. Finally, the fast sampler
for Polya-Gamma distributions can be used in relational and supervised topic models [6, 21].
Acknowledgments
This work is supported by the National Basic Research Program (973 Program) of China (Nos.
2013CB329403, 2012CB316301), National Natural Science Foundation of China (Nos. 61322308,
61305066), Tsinghua University Initiative Scientific Research Program (No. 20121088071), and
Tsinghua National Laboratory for Information Science and Technology, China.
References
[1] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. Smola. Scalable inference in
latent variable models. In International Conference on Web Search and Data Mining (WSDM),
2012.
[2] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[3] D. Blei and J. Lafferty. Correlated topic models. In Advances in Neural Information Processing
Systems (NIPS), 2006.
[4] D. Blei and J. Lafferty. Dynamic topic models. In International Conference on Machine
Learning (ICML), 2006.
[5] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning
Research, 3:993-1022, 2003.
[6] N. Chen, J. Zhu, F. Xia, and B. Zhang. Generalized relational topic models with data augmentation. In International Joint Conference on Artificial Intelligence (IJCAI), 2013.
[7] M. Hoffman, D. Blei, and F. Bach. Online learning for latent Dirichlet allocation. In Advances
in Neural Information Processing Systems (NIPS), 2010.
[8] C. Holmes and L. Held. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1(1):145-168, 2006.
[9] D. Mimno, H. Wallach, and A. McCallum. Gibbs sampling for logistic normal topic models
with graph-based priors. In NIPS Workshop on Analyzing Graphs, 2008.
[10] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed algorithms for topic models.
Journal of Machine Learning Research, (10):1801-1828, 2009.
[11] J. Paisley, C. Wang, and D. Blei. The discrete infinite logistic normal distribution for mixed-membership modeling. In International Conference on Artificial Intelligence and Statistics
(AISTATS), 2011.
[12] N. G. Polson and J. G. Scott. Default Bayesian analysis for multi-way tables: a data-augmentation approach. arXiv:1109.4180, 2011.
[13] N. G. Polson, J. G. Scott, and J. Windle. Bayesian inference for logistic models using Polya-Gamma latent variables. arXiv:1205.0310v2, 2013.
[14] C. P. Robert. Simulation of truncated normal variables. Statistics and Computing, 5:121-125,
1995.
[15] W. Rudin. Principles of mathematical analysis. McGraw-Hill Book Co., 1964.
[16] A. Smola and S. Narayanamurthy. An architecture for parallel topic models. Very Large Data
Base (VLDB), 2010.
[17] M. A. Tanner and W. H. Wong. The calculation of posterior distributions by data augmentation.
Journal of the American Statistical Association, 82(398):528-540, 1987.
[18] D. van Dyk and X. Meng. The art of data augmentation. Journal of Computational and
Graphical Statistics, 10(1):1-50, 2001.
[19] L. Yao, D. Mimno, and A. McCallum. Efficient methods for topic model inference on streaming document collections. In International Conference on Knowledge Discovery and Data
mining (SIGKDD), 2009.
[20] A. Zhang, J. Zhu, and B. Zhang. Sparse online topic models. In International Conference on
World Wide Web (WWW), 2013.
[21] J. Zhu, X. Zheng, and B. Zhang. Improved bayesian supervised topic models with data augmentation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2013.
4,399 | 4,982 | When Are Overcomplete Topic Models Identifiable?
Uniqueness of Tensor Tucker Decompositions
with Structured Sparsity
Daniel Hsu
Columbia University
New York, NY
[email protected]
Animashree Anandkumar
University of California
Irvine, CA
[email protected]
Sham Kakade
Microsoft Research
Cambridge, MA
[email protected]
Majid Janzamin
University of California
Irvine, CA
[email protected]
Abstract
Overcomplete latent representations have been very popular for unsupervised feature learning in recent years. In this paper, we specify which overcomplete models can be identified given observable moments of a certain order. We consider
probabilistic admixture or topic models in the overcomplete regime, where the
number of latent topics can greatly exceed the size of the observed word vocabulary. While general overcomplete topic models are not identifiable, we establish
generic identifiability under a constraint, referred to as topic persistence. Our sufficient conditions for identifiability involve a novel set of ?higher order? expansion conditions on the topic-word matrix or the population structure of the model.
This set of higher-order expansion conditions allow for overcomplete models, and
require the existence of a perfect matching from latent topics to higher order observed words. We establish that random structured topic models are identifiable
w.h.p. in the overcomplete regime. Our identifiability results allow for general
(non-degenerate) distributions for modeling the topic proportions, and thus, we
can handle arbitrarily correlated topics in our framework. Our identifiability results imply uniqueness of a class of tensor decompositions with structured sparsity
which is contained in the class of Tucker decompositions, but is more general than
the Candecomp/Parafac (CP) decomposition.
Keywords: Overcomplete representation, admixture models, generic identifiability, tensor decomposition.
1 Introduction
A probabilistic framework for incorporating features posits latent or hidden variables that can provide a good explanation of the observed data. Overcomplete probabilistic models can incorporate a
much larger number of latent variables compared to the observed dimensionality. In this paper, we
characterize the conditions under which overcomplete latent variable models can be identified from
their observed moments.
For any parametric statistical model, identifiability is a fundamental question of whether the model
parameters can be uniquely recovered given the observed statistics. Identifiability is crucial in a
number of applications where the latent variables are the quantities of interest, e.g. inferring diseases
(latent variables) through symptoms (observations), inferring communities (latent variables) via the
interactions among the actors in a social network (observations), and so on. Moreover, identifiability
can be relevant even in predictive settings, where feature learning is employed for some higher
level task such as classification. For instance, non-identifiability can lead to the presence of nonisolated local optima for optimization-based learning methods, and this can affect their convergence
properties, e.g. see [1].
In this paper, we characterize identifiability for a popular class of latent variable models, known
as the admixture or topic models [2, 3]. These are hierarchical mixture models, which incorporate
the presence of multiple latent states (i.e. topics) in documents consisting of a tuple of observed
variables (i.e. words). In this paper, we characterize conditions under which the topic models are
identified through their observed moments in the overcomplete regime. To this end, we introduce
an additional constraint on the model, referred to as topic persistence. Intuitively, this captures the
?locality? effect among the observed words, and goes beyond the usual ?bag-of-words? or exchangeable topic models. Such local dependencies among observations abound in applications such as text,
images and speech, and can lead to more faithful representation. In addition, we establish that the
presence of topic persistence is central to obtaining model identifiability in the overcomplete regime,
and we provide an in-depth analysis of this phenomenon in this paper.
1.1 Summary of Results
In this paper, we provide conditions for generic1 model identifiability of overcomplete topic models
given observable moments of a certain order (i.e., a certain number of words in each document). We
introduce a novel constraint, referred to as topic persistence, and analyze its effect on identifiability.
We establish identifiability in the presence of a novel combinatorial object, named as perfect n-gram
matching, in the bipartite graph from topics to words (observed variables). Finally, we prove that
random models satisfy these criteria, and are thus identifiable in the overcomplete regime.
Persistent Topic Model: We first introduce the n-persistent topic model, where the parameter n
determines the so-called persistence level of a common topic in a sequence of n successive words, as
seen in Figure 1. The n-persistent model reduces to the popular "bag-of-words" model when n = 1,
and to the single topic model (i.e. only one topic in each document) when n → ∞. Intuitively, topic
persistence aids identifiability since we have multiple views of the common hidden topic generating
a sequence of successive words. We establish that the bag-of-words model (with n = 1) is too
non-informative about the topics to be identifiable in the overcomplete regime. On the other hand,
n-persistent overcomplete topic models with n ≥ 2 are generically identifiable, and we provide a
set of transparent conditions for identifiability.
Deterministic Conditions for Identifiability: Our sufficient conditions for identifiability are in
the form of expansion conditions from the latent topic space to the observed word space. In the
overcomplete regime, there are more topics than words, and thus it is impossible to have expansion
from topics to words. Instead, we impose a novel expansion constraint from topics to "higher order"
words, which allows us to handle overcomplete models. We establish that this condition translates
to the presence of a novel combinatorial object, referred to as a perfect n-gram matching, on the
bipartite graph from topics to words, which encodes the sparsity pattern of the topic-word matrix.
Intuitively, this condition implies "diversity" of the word support for different topics, which leads to
identifiability. In addition, we present tradeoffs among the topic and word space dimensionality, the
topic persistence level, the order of the observed moments at hand, the maximum degree of any topic
in the bipartite graph, and the Kruskal rank [4] of the topic-word matrix, for identifiability to hold.
We also discuss how an ℓ1-based optimization program can recover the model under additional
constraints.

Figure 1: Hierarchical structure of the n-persistent topic model. 2rn words (views) are shown for
some integer r ≥ 1. A single topic y_j, j ∈ [2r], is chosen for each group of n successive views
{x_{(j−1)n+1}, . . . , x_{(j−1)n+n}}.

1 A model is generically identifiable if all the parameters in the parameter space are identifiable
almost surely. Refer to Definition 1 for more discussion.
Identifiability of Random Structured Topic Models: We explicitly characterize the regime of
identifiability for the random setting, where each topic i is randomly supported on a set of d_i words,
i.e. the bipartite graph is a random graph. For this random model with q topics, a p-dimensional word
vocabulary, and topic persistence level n, when q = O(p^n) and Ω(log p) ≤ d_i ≤ O(p^{1/n}) for all
topics i, the topic-word matrix is identifiable from the (2n)-th order observed moments with high
probability. Furthermore, we establish that the size condition q = O(p^n) for identifiability is tight.

Implications on Uniqueness of Overcomplete Tucker and CP Tensor Decompositions: We
establish that identifiability of an overcomplete topic model is equivalent to uniqueness of the
observed moment tensor (of a certain order) decomposition. Our identifiability results for persistent
topic models imply uniqueness of a structured class of tensor decompositions, which is contained in
the class of Tucker decompositions, but is more general than the Candecomp/Parafac (CP)
decomposition [5]. This sub-class of Tucker decompositions involves structured sparsity and symmetry
constraints on the core tensor, and sparsity constraints on the inverse factors of the decomposition.

A detailed discussion of the overview of techniques and related works is provided in the long version [12].
2 Model

Notations: The set {1, 2, . . . , n} is denoted by [n] := {1, 2, . . . , n}. The cardinality of a set S is
denoted by |S|. For any vector u (or matrix U), the support, denoted by Supp(u), corresponds to the
locations of its non-zero entries. For a vector u ∈ R^q, Diag(u) ∈ R^{q×q} is the diagonal matrix with
u on its main diagonal. The column space of a matrix A is denoted by Col(A). Operators ⊗ and
⊙ refer to the Kronecker and Khatri-Rao products [6], respectively.
2.1 Persistent topic model
An admixture model specifies a q-dimensional vector of topic proportions h ∈ Δ^{q−1} := {u ∈ R^q :
u_i ≥ 0, Σ_{i=1}^{q} u_i = 1} which generates the observed variables x_l ∈ R^p through vectors
a_1, . . . , a_q ∈ R^p. This collection of vectors a_i, i ∈ [q], is referred to as the population structure or
topic-word matrix [7]. For instance, a_i represents the conditional distribution of words given topic
i. The latent variable h is a q-dimensional random vector h := [h_1, . . . , h_q]^T known as the proportion
vector. A prior distribution P(h) over the probability simplex Δ^{q−1} characterizes the prior joint
distribution over the latent variables (topics) h_i, i ∈ [q].
The n-persistent topic model has the three-level multi-view hierarchy shown in Figure 1. In this model, a
common hidden topic is persistent for a sequence of n words {x_{(j−1)n+1}, . . . , x_{(j−1)n+n}}, j ∈ [2r].
Note that the random observed variables (words) are exchangeable within groups of size n, where n
is the persistence level, but are not globally exchangeable.

We now describe a linear representation for the n-persistent topic model, on the lines of [9], but with
extensions to incorporate persistence. Each random variable y_j, j ∈ [2r], is a discrete-valued
q-dimensional random variable encoded by the basis vectors e_i, i ∈ [q], where e_i is the i-th basis
vector in R^q with the i-th entry equal to 1 and all the others equal to zero. When y_j = e_i ∈ R^q,
the topic of the j-th group of words is i. Given the proportion vector h ∈ R^q, the topics y_j, j ∈ [2r], are
independently drawn according to the conditional expectation E[y_j | h] = h, j ∈ [2r], or equivalently
Pr[y_j = e_i | h] = h_i, j ∈ [2r], i ∈ [q].
Each observed variable x_l, l ∈ [2rn], is a discrete-valued p-dimensional random variable (word),
where p is the size of the vocabulary. Again, we assume that the variables x_l are encoded by the basis
vectors e_k, k ∈ [p], such that x_l = e_k ∈ R^p when the l-th word in the document is k. Given the
corresponding topic y_j, j ∈ [2r], the words x_l, l ∈ [2rn], are independently drawn according to the
conditional expectation

E[x_{(j−1)n+k} | y_j = e_i] = a_i,  i ∈ [q], j ∈ [2r], k ∈ [n],

where the vectors a_i ∈ R^p, i ∈ [q], are the conditional probability distribution vectors. The matrix
A = [a_1 | a_2 | · · · | a_q] ∈ R^{p×q} collecting these vectors is called the population structure or topic-word
matrix.

The (2rn)-th order moment of the observed variables x_l, l ∈ [2rn], for some integer r ≥ 1, is defined
as (in matrix form)

M_{2rn}(x) := E[(x_1 ⊗ x_2 ⊗ · · · ⊗ x_{rn})(x_{rn+1} ⊗ x_{rn+2} ⊗ · · · ⊗ x_{2rn})^T] ∈ R^{p^{rn} × p^{rn}}.  (1)

For the n-persistent topic model with 2rn observations (words) x_l, l ∈ [2rn], the corresponding
moment is denoted by M^{(n)}_{2rn}(x).

In this paper, we consider the problem of identifiability when exact moments are available.
Given M^{(n)}_{2rn}(x), what are the sufficient conditions under which the population structure
A = [a_1 | a_2 | · · · | a_q] ∈ R^{p×q} is identifiable? This is answered in Section 3.
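To make the generative process concrete, here is a minimal sampler for one document of the n-persistent model. It follows the description above directly (column-stochastic A, one topic per block of n successive words) and is our own sketch, not code from the paper.

```python
import numpy as np

def sample_persistent_doc(A, h, n, r, seed=None):
    """Draw the 2*r*n words of one document from the n-persistent model.

    A: (p, q) topic-word matrix whose columns a_i sum to one.
    h: (q,) topic-proportion vector on the simplex.
    """
    rng = np.random.default_rng(seed)
    p, q = A.shape
    words = []
    for _ in range(2 * r):                  # 2r blocks, one topic y_j each
        topic = rng.choice(q, p=h)          # Pr[y_j = e_i | h] = h_i
        # n successive words share the persistent topic y_j.
        words.extend(rng.choice(p, size=n, p=A[:, topic]))
    return np.array(words)
```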
3 Sufficient Conditions for Generic Identifiability
In this section, the identifiability result for the n-persistent topic model with access to (2n)-th order
observed moment is provided. First, sufficient deterministic conditions on the population structure
A are provided for identifiability in Theorem 1. Next, the deterministic analysis is specialized to a
random structured model in Theorem 2.
We now make the notion of identifiability precise. As defined in the literature, (strict) identifiability
means that the population structure A can be uniquely recovered up to permutation and scaling for
all A ∈ R^{p×q}. Instead, we consider a more relaxed notion of identifiability, known as generic
identifiability.
Definition 1 (Generic identifiability). We refer to a matrix A ∈ R^{p×q} as generic, with a fixed
sparsity pattern, when the nonzero entries of A are drawn from a distribution which is absolutely
continuous with respect to the Lebesgue measure.2 For a given sparsity pattern, the class of population
structure matrices is said to be generically identifiable [10] if all the non-identifiable matrices form
a set of Lebesgue measure zero.
The (2r)-th order moment of the hidden variables h ∈ R^q, denoted by M_{2r}(h), is defined as

M_{2r}(h) := E[(h ⊗ · · · ⊗ h)(h ⊗ · · · ⊗ h)^T] ∈ R^{q^r × q^r},  (2)

where each Kronecker product contains r copies of h.
Condition 1 (Non-degeneracy). The (2r)-th order moment of the hidden variables h ∈ R^q, defined in
equation (2), is full rank (non-degeneracy of hidden nodes).
Note that there is no hope of distinguishing distinct hidden nodes without this non-degeneracy assumption. We do not impose any other assumption on hidden variables and can incorporate arbitrarily correlated topics.
Furthermore, we can only hope to identify the population structure A up to scaling and permutation.
Therefore, we can identify A up to a canonical form defined as:
Definition 2 (Canonical form). Population structure A is said to be in canonical form if all of its
columns have unit norm.
3.1 Deterministic Conditions for Generic Identifiability
Before providing the main result, a generalized notion of (perfect) matching for bipartite graphs is
defined. We subsequently impose these conditions on the bipartite graph from topics to words which
encodes the sparsity pattern of population structure A.
2 As an equivalent definition, if the non-zero entries of an arbitrary sparse matrix are independently
perturbed with noise drawn from a continuous distribution to generate A, then A is called generic.
Generalized matching for bipartite graphs: A bipartite graph with two disjoint vertex sets Y
and X and an edge set E between them is denoted by G(Y, X; E). Given the bi-adjacency matrix
A, the notation G(Y, X; A) is also used to denote a bipartite graph. Here, the rows and columns of
the matrix A ∈ R^{|X|×|Y|} are indexed by the X and Y vertex sets, respectively. Furthermore, for any
subset S ⊆ Y, the set of neighbors of vertices in S with respect to A is denoted by N_A(S).

Definition 3 ((Perfect) n-gram matching). An n-gram matching M for a bipartite graph G(Y, X; E)
is a subset of edges M ⊆ E which satisfies the following conditions. First, for any j ∈ Y, we
have |N_M(j)| ≤ n. Second, for any j_1, j_2 ∈ Y, j_1 ≠ j_2, we have min{|N_M(j_1)|, |N_M(j_2)|} >
|N_M(j_1) ∩ N_M(j_2)|.

A perfect n-gram matching or Y-saturating n-gram matching for the bipartite graph G(Y, X; E) is
an n-gram matching M in which each vertex in Y is the end-point of exactly n edges in M.

In words, in an n-gram matching M, each vertex j ∈ Y is at most the end-point of n edges in M and
for any pair of distinct vertices in Y (j_1, j_2 ∈ Y, j_1 ≠ j_2), there exists at least one non-common
neighbor in the set X for each of them (j_1 and j_2).

Note that a 1-gram matching is the same as a regular matching for bipartite graphs.
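In the perfect case, every topic is matched to exactly n of its neighboring words, and the pairwise condition then just says these n-sets must be pairwise distinct. Existence therefore reduces to a perfect bipartite matching between topics and the n-subsets of their supports, which the following sketch (our own, using Kuhn's augmenting-path algorithm) tests exactly; it enumerates all n-subsets, so it is only practical for small instances.

```python
from itertools import combinations

def has_perfect_ngram_matching(neighbors, n):
    """Exact existence test for a perfect n-gram matching (Definition 3).

    neighbors: list of sets; neighbors[j] is the word support of topic j.
    Each topic must be assigned an n-subset of its support, and since the
    matched sets all have size n, the condition |intersection| < n means
    the assigned n-subsets must be pairwise distinct.
    """
    cands = [[frozenset(c) for c in combinations(sorted(N), n)]
             for N in neighbors]
    match = {}                              # n-subset -> topic holding it

    def try_assign(j, seen):
        for s in cands[j]:
            if s in seen:
                continue
            seen.add(s)
            if s not in match or try_assign(match[s], seen):
                match[s] = j
                return True
        return False

    return all(try_assign(j, set()) for j in range(len(neighbors)))

# Toy check: three topics on three words, each supported on two words.
print(has_perfect_ngram_matching([{0, 1}, {1, 2}, {0, 2}], n=2))  # True
```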
Remark 1 (Necessary size bound). Consider a bipartite graph G(Y, X; E) with |Y| = q and
|X| = p which has a perfect n-gram matching. Note that there are (p choose n) n-combinations on the X side
and each combination can have at most one neighbor (a node in Y which is connected to all nodes
in the combination) through the matching, and therefore we necessarily have q ≤ (p choose n). For
example, with p = 100 words and n = 2, up to (100 choose 2) = 4950 topics can be matched.
Identifiability conditions based on existence of perfect n-gram matching in topic-word graph:
Now, we are ready to propose the identifiability conditions and result.
Condition 2 (Perfect n-gram matching on A). The bipartite graph G(V_h, V_o; A) between hidden
and observed variables has a perfect n-gram matching.

The above condition implies that the sparsity pattern of matrix A is appropriately scattered in the
mapping from hidden to observed variables for the model to be identifiable. Intuitively, it means that
every hidden node can be distinguished from another hidden node by its unique set of neighbors under
the corresponding n-gram matching. Furthermore, Condition 2 is the key to establishing identifiability
in the overcomplete regime. As stated in the size bound in Remark 1, for n ≥ 2, the number of hidden
variables can exceed the number of observed variables and we can still have a perfect n-gram matching.
Definition 4 (Kruskal rank, [11]). The Kruskal rank or krank of a matrix A is defined as the
maximum number k such that every subset of k columns of A is linearly independent.

Condition 3 (Krank condition on A). The Kruskal rank of matrix A satisfies the bound krank(A) ≥
d_max(A)^n, where d_max(A) is the maximum node degree of any column of A.
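Because the Kruskal rank enters the conditions directly, a brute-force check is a handy sanity test on small synthetic matrices; it enumerates all column subsets, so it is exponential and purely illustrative.

```python
import numpy as np
from itertools import combinations

def krank(A, tol=1e-10):
    """Largest k such that every set of k columns of A is independent."""
    p, q = A.shape
    k = 0
    for size in range(1, min(p, q) + 1):
        if all(np.linalg.matrix_rank(A[:, list(S)], tol=tol) == size
               for S in combinations(range(q), size)):
            k = size
        else:
            break                 # a dependent k-set kills all larger k
    return k

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(krank(A))                   # 2: every pair of columns is independent
```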
In the overcomplete regime, it is not possible for A to be full column rank and krank(A) < |Vh | = q.
However, note that a large enough krank ensures that appropriate sized subsets of columns of A are
linearly independent. For instance, when krank(A) > 1, any two columns cannot be collinear and
the above condition rules out the collinear case for identifiability. In the above condition, we see
that a larger krank can incorporate denser connections between topics and words.
The main identifiability result under a fixed graph structure is stated in the following theorem for
n ≥ 2, where n is the topic persistence level.

Theorem 1 (Generic identifiability under deterministic topic-word graph structure). Let M^{(n)}_{2rn}(x)
in equation (1) be the (2rn)-th order observed moment of the n-persistent topic model, for some
integer r ≥ 1. If the model satisfies Conditions 1, 2 and 3, then, for any n ≥ 2, all the columns of the
population structure A are generically identifiable from M^{(n)}_{2rn}(x). Furthermore, the (2r)-th order
moment of the hidden variables, denoted by M_{2r}(h), is also generically identifiable.
The theorem is proved in Appendix A of the long version [12]. It is seen that the population structure
A is identifiable, given any observed moment of order at least 2n. Increasing the order of observed
moment results in identifying higher order moments of the hidden variables.
The above theorem does not cover the case of n = 1. This is the usual bag-of-words admixture
model. Identifiability of this model has been studied earlier [13], and we recall it below.
Remark 2 (Bag-of-words admixture model, [13]). Given (2r)-th order observed moments with
r ≥ 1, the structure of the popular bag-of-words admixture model and the (2r)-th order moment of
the hidden variables are identifiable when A is full column rank and the following expansion condition
holds [13]:

|N_A(S)| ≥ |S| + d_max(A),  ∀S ⊆ V_h, |S| ≥ 2.  (3)
Our result for n ≥ 2 in Theorem 1 provides identifiability in the overcomplete regime with the weaker
matching Condition 2 and krank Condition 3. The matching Condition 2 is weaker than the above
expansion condition, which is based on a perfect matching and hence does not allow overcomplete
models. Furthermore, the above result for the bag-of-words admixture model requires full column
rank of A, which is more stringent than our krank Condition 3.
Remark 3 (Recovery using ℓ1 optimization). It turns out that our conditions for identifiability imply
that the columns of the n-gram matrix A^{⊙n} (defined in footnote 3 below) are the sparsest vectors
in Col(M^{(n)}_{2n}(x)) having a tensor rank of one; see Appendix A in the long version [12]. This
implies recovery of the columns of A through exhaustive search, which is not efficient. Efficient
ℓ1-based recovery algorithms have been analyzed in [13, 14] for the undercomplete case (n = 1).
They can be employed here for recovery from higher order moments as well. Exploiting additional
structure present in A^{⊙n}, for n > 1, such as the rank-1 test devices proposed in [15], is an
interesting avenue for future investigation.
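As a rough illustration of the ℓ1 route, in the spirit of [13, 14] but not the exact algorithms analyzed there, one can search for sparse vectors in a column space by linear programming: given a basis Q of the column space, minimize ||Qc||_1 while pinning one coordinate of Qc to 1. The helper name and setup below are our own.

```python
import numpy as np
from scipy.optimize import linprog

def l1_sparse_in_colspace(Q, i):
    """min ||Q c||_1  s.t.  (Q c)_i = 1, as an LP in variables (c, t)
    with the standard split  -t <= Q c <= t  elementwise."""
    m, r = Q.shape
    cost = np.concatenate([np.zeros(r), np.ones(m)])          # minimize sum t
    A_ub = np.block([[Q, -np.eye(m)], [-Q, -np.eye(m)]])      # |Qc| <= t
    A_eq = np.concatenate([Q[i], np.zeros(m)])[None, :]       # (Qc)_i = 1
    res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(2 * m),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * r + [(0, None)] * m)
    return Q @ res.x[:r] if res.success else None
```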
3.2 Analysis Under Random Topic-Word Graph Structures
In this section, we specialize the identifiability result to the random case. This result is based on more
transparent conditions on the size and the degree of the random bipartite graph G(Vh , Vo ; A). We
consider the random model where, in the bipartite graph G(V_h, V_o; A), each node i ∈ V_h is randomly
connected to d_i different nodes in the set V_o. Note that this is a heterogeneous degree model.
Condition 4 (Size condition). The random bipartite graph G(V_h, V_o; A) with |V_h| = q, |V_o| = p,
and A ∈ R^{p×q}, satisfies the size condition q ≤ c (p choose n) for some constant 0 < c < 1.
This size condition is required to establish that the random bipartite graph has a perfect n-gram
matching (and hence satisfies the deterministic Condition 2). It is shown that the necessary size
constraint q = O(p^n) stated in Remark 1 is achieved in the random case. Thus, the above constraint
allows for the overcomplete regime, where q ≫ p for n ≥ 2, and is tight.
Condition 5 (Degree condition). In the random bipartite graph G(V_h, V_o; A) with |V_h| = q, |V_o| =
p, and A ∈ R^{p×q}, the degree d_i of nodes i ∈ V_h satisfies the lower and upper bounds d_min ≥
max{1 + β log p, α log p} for some constants β > (n−1)/log(1/c), α > max{2n^2 + 1, 2βn}, and
d_max ≤ (cp)^{1/n}.
Intuitively, the lower bound on the degree is required to show that the corresponding bipartite graph
G(V_h, V_o; A) has a sufficient number of random edges to ensure that it has a perfect n-gram matching
with high probability. The upper bound on the degree is mainly required to satisfy the krank
Condition 3, where d_max(A)^n ≤ krank(A).

It is important to see that, for n ≥ 2, the above condition on the degree covers a range of models from
sparse to intermediate regimes, and it is reasonable in a number of applications that each topic does
not generate a very large number of words.
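This regime is easy to probe empirically: draw random supports of degree d and test for a perfect n-gram matching, reusing has_perfect_ngram_matching from the sketch after Definition 3. The parameters below are arbitrary small values, and a handful of successful trials is of course evidence, not a proof.

```python
import numpy as np

def random_supports(q, p, d, rng):
    """Each of the q topics is supported on d words drawn uniformly."""
    return [set(rng.choice(p, size=d, replace=False)) for _ in range(q)]

rng = np.random.default_rng(0)
p, n, d, q = 30, 2, 6, 60          # overcomplete: q = 2p topics for n = 2
hits = sum(has_perfect_ngram_matching(random_supports(q, p, d, rng), n)
           for _ in range(20))
print(hits, "of 20 random supports admit a perfect 2-gram matching")
```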
Probability rate constants: The probability rate of success in the following random identifiability
result is specified by constants β′ > 0 and γ = γ1 + γ2 > 0 as

β′ = −β log c − n + 1,  (4)

γ1 = e^{n−1} ( c / n^{n−1} + e^2 / n^{β+1} ) / (1 − δ1),  (5)

γ2 = c^{n−1} e^2 / ( n^n (1 − δ2) ),  (6)

where δ1 and δ2 are constants satisfying e^2 (n/p)^{β log(1/c)} < δ1 < 1 and
( c^{n−1} e^2 / n^n ) p^{−β′} < δ2 < 1.

3 A^{⊙n} := A ⊙ · · · ⊙ A (n times).
Figure 2: Hierarchical structure of the single topic model and bag-of-words admixture model, shown
for 2m words (views): (a) the single topic model (infinite-persistent topic model); (b) the bag-of-words
admixture model (1-persistent topic model).
Theorem 2 (Random identifiability). Let M^{(n)}_{2rn}(x) in equation (1) be the (2rn)-th order observed
moment of the n-persistent topic model for some integer r ≥ 1. If the model with random population
structure A satisfies Conditions 1, 4 and 5, then whp (with probability at least 1 − γ p^{−β′} for
constants β′ > 0 and γ > 0, specified in (4)-(6)), for any n ≥ 2, all the columns of the population
structure A are identifiable from M^{(n)}_{2rn}(x). Furthermore, the (2r)-th order moment of the hidden
variables, denoted by M_{2r}(h), is also identifiable, whp.
The theorem is proved in Appendix B of the long version [12]. Similar to the deterministic analysis,
it is seen that the population structure A is identifiable given any observed moment with order at
least 2n. Increasing the order of observed moment results in identifying higher order moments of
the hidden variables.
The above identifiability theorem only covers for n ? 2 and the n = 1 case is addressed in the
following remark.
Remark 4 (Bag-of-words admixture model). The identifiability result for the random bag-of-words
admixture model is comparable to the result in [14], which considers exact recovery of sparsely-used
dictionaries. They assume that Y = DX is given for some unknown arbitrary dictionary D ∈ R^{q×q}
and unknown random sparse coefficient matrix X ∈ R^{q×p}. They establish that if D ∈ R^{q×q} is
full rank and the random sparse coefficient matrix X ∈ R^{q×p} follows the Bernoulli-subgaussian
model with size constraint p > Cq log q and degree constraint O(log q) < E[d] < O(q log q), then
the model is identifiable, whp. Comparing the size and degree constraints, our identifiability result
for n ≥ 2 requires a more stringent upper bound on the degree (d = O(p^{1/n})), while a more relaxed
condition on the size (q = O(p^n)), which allows identifiability in the overcomplete regime.
Remark 5 (The size condition is tight). The size bound q = O(p^n) in the above theorem achieves
the necessary condition that q ≤ (p choose n) = O(p^n) (see Remark 1), and is therefore tight. The
sufficiency is argued in Theorem 3 of the long version [12], where we show that the matching
Condition 2 holds under the above size and degree Conditions 4 and 5.
4 Why Persistence Helps in Identifiability of Overcomplete Models?
In this section, we provide the moment characterization of the 2-persistent topic model. Then, we
provide a discussion and comparison on why persistence helps in providing identifiability in the
overcomplete regime. The moment characterization and detailed tensor analysis are provided in the
long version [12].

The single topic model (n → ∞) is shown in Figure 2a and the bag-of-words admixture model
(n = 1) is shown in Figure 2b. In order to have a fair comparison among these different models, we
fix the number of observed variables to 4 (the case m = 2) and vary the persistence level. Consider three
different models: the 2-persistent topic model, the single topic model, and the bag-of-words admixture
model (1-persistent topic model). From the moment characterization results provided in the long
version [12], we have the following moment forms for each of these models.
For the 2-persistent topic model with 4 words (r = 1, n = 2), we have

M^{(2)}_4(x) = (A ⊙ A) E[hh^T] (A ⊙ A)^T.  (7)

For the single topic model with 4 words, we have

M^{(∞)}_4(x) = (A ⊙ A) Diag(E[h]) (A ⊙ A)^T.  (8)

And for the bag-of-words admixture model with 4 words (r = 2, n = 1), we have

M^{(1)}_4(x) = (A ⊗ A) E[(h ⊗ h)(h ⊗ h)^T] (A ⊗ A)^T.  (9)
Note that for the single topic model in (8), the Khatri-Rao product matrix A ⊙ A ∈ R^{p^2×q} has
the same number of columns (i.e. the latent dimensionality) as the original matrix A, while
the number of rows (i.e. the observed dimensionality) is increased. Thus, the Khatri-Rao product
"expands" the effect of the hidden variables to higher order observed variables, which is the key towards
identifying overcomplete models. In other words, the original overcomplete representation becomes
determined due to the "expansion effect" of the Khatri-Rao product structure of the higher order
observed moments.
On the other hand, in the bag-of-words admixture model in (9), this interesting "expansion property"
does not occur, and we have the Kronecker product A ⊗ A ∈ R^{p^2×q^2} in place of the Khatri-Rao
product. The Kronecker product operation increases both the number of columns (i.e. the latent
dimensionality) and the number of rows (i.e. the observed dimensionality), which implies that higher
order moments do not help in identifying overcomplete models.
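The dimension counting above is easy to verify numerically. The sketch below (our own; khatri_rao is implemented from its definition) contrasts the two products for a random overcomplete A: the Khatri-Rao power keeps q columns and is generically full column rank whenever q <= p(p+1)/2, while the Kronecker power has q^2 columns but rank at most p^2.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: (p*m, q) for A of shape (p, q), B (m, q)."""
    p, q = A.shape
    m, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(p * m, q)

rng = np.random.default_rng(0)
p, q = 4, 8                                 # overcomplete: q > p
A = rng.random((p, q))

KR = khatri_rao(A, A)                       # shape (p^2, q): q unchanged
KP = np.kron(A, A)                          # shape (p^2, q^2): q squared
print(KR.shape, np.linalg.matrix_rank(KR))  # (16, 8), rank 8 generically
print(KP.shape, np.linalg.matrix_rank(KP))  # (16, 64), rank <= p^2 = 16
```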
Finally, contrasting equation (7) with (8) and (9), we find that the 2-persistent model retains the
desirable property of possessing Khatri-Rao products, while being more general than the form for
the single topic model in (8). This key property enables us to establish identifiability of topic models
with finite persistence levels.
Remark 6 (Relationship to tensor decompositions). In the long version of this work [12], we establish that the tensor representation of our model is a special case of the Tucker representation,
but more general than the symmetric CP representation [6]. Therefore, our identifiability results
also imply uniqueness of a class of tensor decompositions with structured sparsity which is contained in the class of Tucker decompositions, but is more general than the Candecomp/Parafac (CP)
decomposition.
5 Proof sketch

The moment of the n-persistent topic model with 2n words is derived as M^{(n)}_{2n}(x) =
(A^{⊙n}) E[hh^T] (A^{⊙n})^T; see [12]. When the hidden variables are non-degenerate and A^{⊙n} is
full column rank, we have Col(M^{(n)}_{2n}(x)) = Col(A^{⊙n}). Therefore, the problem of recovering A
from M^{(n)}_{2n}(x) reduces to finding A^{⊙n} in Col(A^{⊙n}). Then, under the expansion condition
(see footnote 4)

|N_{A^{⊙n}_{Rest.}}(S)| ≥ |S| + d_max(A^{⊙n}),  ∀S ⊆ V_h, |S| > krank(A),

we establish that matrix A is identifiable from Col(A^{⊙n}). This identifiability claim is proved by
showing that the columns of A^{⊙n} are the sparsest and rank-1 (in the tensor form) vectors in
Col(A^{⊙n}) under the sufficient expansion and genericity conditions.

Then, it is established that the sufficient combinatorial conditions on matrix A (Conditions 2 and 3)
ensure that the expansion and rank conditions on A^{⊙n} are satisfied. This is shown by proving that
the existence of a perfect n-gram matching on A results in the existence of a perfect matching on
A^{⊙n}. For further discussion of proof techniques, see the long version [12].
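The first step, Col(M^{(n)}_{2n}(x)) = Col(A^{⊙n}), can be checked numerically on a small overcomplete instance, reusing khatri_rao from the sketch in Section 4; the Dirichlet prior here is just one convenient non-degenerate choice of P(h).

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 4, 8, 2
A = rng.random((p, q))
An = khatri_rao(A, A)                         # the n-gram matrix for n = 2

H = rng.dirichlet(np.ones(q), size=20000)     # draws of h, non-degenerate
M = An @ (H.T @ H / len(H)) @ An.T            # moment as in eq. (7)

# Col(M) = Col(An): appending An's columns does not raise the rank.
print(np.linalg.matrix_rank(An),
      np.linalg.matrix_rank(M),
      np.linalg.matrix_rank(np.hstack([M, An])))   # all equal (here 8)
```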
Acknowledgments
The authors acknowledge useful discussions with Sina Jafarpour, Adel Javanmard, Alex Dimakis,
Moses Charikar, Sanjeev Arora, Ankur Moitra and Kamalika Chaudhuri. A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF Career award CCF-1254106, NSF Award
CCF-1219234, ARO Award W911NF-12-1-0404, and ARO YIP Award W911NF-13-1-0084. M.
Janzamin is supported by NSF Award CCF-1219234, ARO Award W911NF-12-1-0404 and ARO
YIP Award W911NF-13-1-0084.
4 A^{⊙n}_{Rest.} is the restricted version of the n-gram matrix A^{⊙n}, in which the redundant rows of
A^{⊙n} are removed.
References
[1] André Uschmajew. Local convergence of the alternating least squares algorithm for canonical
tensor approximation. SIAM Journal on Matrix Analysis and Applications, 33(2):639-652,
2012.
[2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of
Machine Learning Research, 3:993-1022, 2003.
[3] J. K. Pritchard, M. Stephens, and P. Donnelly. Inference of population structure using multilocus genotype data. Genetics, 155:945-959, 2000.
[4] J.B. Kruskal. More factors than subjects, tests and treatments: an indeterminacy theorem for
canonical decomposition and individual differences scaling. Psychometrika, 41(3):281-293,
1976.
[5] Tamara Kolda and Brett Bader. Tensor decompositions and applications. SIAM Review,
51(3):455-500, 2009.
[6] G.H. Golub and C.F. Van Loan. Matrix Computations. The Johns Hopkins University Press,
Baltimore, Maryland, 2012.
[7] XuanLong Nguyen. Posterior contraction of the population polytope in finite admixture models. arXiv preprint arXiv:1206.0068, 2012.
[8] T. Austin et al. On exchangeable random variables and the statistics of large graphs and hypergraphs. Probab. Surv., 5:80-145, 2008.
[9] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor Methods for Learning
Latent Variable Models. Under Review. J. of Machine Learning. Available at arXiv:1210.7559,
Oct. 2012.
[10] Elizabeth S. Allman, John A. Rhodes, and Amelia Taylor. A semialgebraic description of the
general Markov model on phylogenetic trees. arXiv preprint arXiv:1212.1200, Dec. 2012.
[11] J.B. Kruskal. Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95-138,
1977.
[12] A. Anandkumar, D. Hsu, M. Janzamin, and S. Kakade. When are Overcomplete Topic Models
Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity. Preprint
available on arXiv:1308.2853, Aug. 2013.
[13] A. Anandkumar, D. Hsu, A. Javanmard, and S. M. Kakade. Learning Linear Bayesian Networks with Latent Variables. ArXiv e-prints, September 2012.
[14] Daniel A. Spielman, Huan Wang, and John Wright. Exact recovery of sparsely-used dictionaries. arXiv preprint, abs/1206.5882, 2012.
[15] L. De Lathauwer, J. Castaing, and J.-F. Cardoso. Fourth-order cumulant-based blind identification of underdetermined mixtures. IEEE Trans. on Signal Processing, 55:2965-2973, June
2007.